Commit 29bb227

Add a minor Fraction.__hash__() optimization (pythonGH-15313)
* Add a minor `Fraction.__hash__` optimization that got lost in the shuffle. Document the optimizations.
1 parent 0567786

1 file changed: Lib/fractions.py (+17 lines, -2 lines)
```diff
@@ -564,10 +564,25 @@ def __hash__(self):
         try:
             dinv = pow(self._denominator, -1, _PyHASH_MODULUS)
         except ValueError:
-            # ValueError means there is no modular inverse
+            # ValueError means there is no modular inverse.
             hash_ = _PyHASH_INF
         else:
-            hash_ = hash(abs(self._numerator)) * dinv % _PyHASH_MODULUS
+            # The general algorithm now specifies that the absolute value of
+            # the hash is
+            #     (|N| * dinv) % P
+            # where N is self._numerator and P is _PyHASH_MODULUS.  That's
+            # optimized here in two ways:  first, for a non-negative int i,
+            # hash(i) == i % P, but the int hash implementation doesn't need
+            # to divide, and is faster than doing % P explicitly.  So we do
+            #     hash(|N| * dinv)
+            # instead.  Second, N is unbounded, so its product with dinv may
+            # be arbitrarily expensive to compute.  The final answer is the
+            # same if we use the bounded |N| % P instead, which can again
+            # be done with an int hash() call.  If 0 <= i < P, hash(i) == i,
+            # so this nested hash() call wastes a bit of time making a
+            # redundant copy when |N| < P, but can save an arbitrarily large
+            # amount of computation for large |N|.
+            hash_ = hash(hash(abs(self._numerator)) * dinv)
         result = hash_ if self._numerator >= 0 else -hash_
         return -2 if result == -1 else result
```
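The equivalence the diff's comment argues for can be checked directly: both the unoptimized `(|N| * dinv) % P` form and the nested-`hash()` form must agree with CPython's own `hash(Fraction(...))`. This is a small standalone sketch; the helper names `hash_fraction_reference` and `hash_fraction_optimized` are illustrative, not part of the stdlib, and `P` comes from `sys.hash_info.modulus` rather than the private `_PyHASH_MODULUS`.

```python
import sys
from fractions import Fraction

# The modulus used by CPython's numeric hashing (a Mersenne prime,
# 2**61 - 1 on 64-bit builds); same value as fractions._PyHASH_MODULUS.
P = sys.hash_info.modulus

def hash_fraction_reference(n, d):
    """Unoptimized form: (|n| * dinv) % P, as the spec states it."""
    try:
        dinv = pow(d, -1, P)        # modular inverse; needs Python 3.8+
    except ValueError:
        hash_ = sys.hash_info.inf   # denominator divisible by P
    else:
        hash_ = abs(n) * dinv % P
    result = hash_ if n >= 0 else -hash_
    return -2 if result == -1 else result

def hash_fraction_optimized(n, d):
    """Optimized form from the commit: nested hash() calls."""
    try:
        dinv = pow(d, -1, P)
    except ValueError:
        hash_ = sys.hash_info.inf
    else:
        # hash(i) == i % P for non-negative int i, so the inner hash()
        # bounds |n| to [0, P) before the product is formed, keeping the
        # multiplication cheap even when n has thousands of digits.
        hash_ = hash(hash(abs(n)) * dinv)
    result = hash_ if n >= 0 else -hash_
    return -2 if result == -1 else result

# Both forms match CPython's built-in Fraction hash, including for a
# numerator far larger than P.
for f in [Fraction(3, 7), Fraction(22, 7), Fraction(-1, 3),
          Fraction(-10**500 + 1, 9)]:
    n, d = f.as_integer_ratio()
    assert hash_fraction_reference(n, d) == hash(f)
    assert hash_fraction_optimized(n, d) == hash(f)
```

The payoff of the optimized form is the second `hash()` call: it replaces an unbounded-size multiplication with one on operands already reduced below `P`, at the cost of a redundant copy when the numerator was small anyway.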
