
Conversation

@pganssle

This fixes the bug and updates the tests.

Now that the bug is fixed, the try/except can be removed.
@pganssle
Author

I'm not a core dev so I can't push directly to your branch.

@izbyshev
Owner

@pganssle Thank you for actually fixing the problem! Would you prefer to add tests and merge it separately as the PR for bpo-34481 or combine our PRs and fix both issues at the same time?

@pganssle
Author

Doesn't really matter to me, but thinking about it more, we should probably do it as two separate PRs (I'll wait for yours to get merged), so that I can easily update it in response to review comments.

@izbyshev
Owner

@pganssle You also need to handle PyUnicode_AsUTF8AndSize() inside the loop (for "%Z"); otherwise strftime may still fail:

>>> import datetime as dt
>>> tzinfo = dt.timezone(dt.timedelta(), '\ud800')
>>> t = dt.time(tzinfo=tzinfo)
>>> t.strftime("%Z")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'utf-8' codec can't encode character '\ud800' in position 0: surrogates not allowed
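
For context, the underlying failure is simply UTF-8 refusing to encode a lone surrogate; an error handler such as 'backslashreplace' (presumably the basis of the escaping scheme described below) sidesteps the exception:

>>> '\ud800'.encode('utf-8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'utf-8' codec can't encode character '\ud800' in position 0: surrogates not allowed
>>> '\ud800'.encode('utf-8', 'backslashreplace')
b'\\ud800'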

This is now also a more complete fix for bpo-6697.
This will cut down on some of the per-environment variability of this function.
This passes a backslash-escaped unicode string to strftime in the event
that locale encoding fails. To ensure that the relevant string is
round-trippable, all backslashes in the original string are
double-escaped, and then unescaped when the result is decoded.
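
A minimal Python sketch of that round-trip, assuming the 'backslashreplace' error handler (the actual change lives in CPython's C code; the helper names here are illustrative):

import codecs

def escape_for_strftime(fmt):
    # Double existing backslashes so the escape sequences added by
    # 'backslashreplace' below remain unambiguous on the way back.
    return fmt.replace('\\', '\\\\')

def unescape_result(raw, encoding):
    # Decode strftime's output, then undo both layers of escaping:
    # '\uXXXX' sequences become the original characters and doubled
    # backslashes collapse back to single ones. (unicode_escape assumes
    # Latin-1-compatible text, which is fine for a sketch.)
    return codecs.decode(raw.decode(encoding), 'unicode_escape')

fmt = '%Z \ud800'
encoded = escape_for_strftime(fmt).encode('utf-8', 'backslashreplace')
# 'encoded' can be handed to the C-level strftime without raising;
# decoding afterwards restores the surrogate exactly.
assert unescape_result(encoded, 'utf-8') == fmt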
This reverts commit c1381d5.
@pganssle closed this Oct 23, 2018
izbyshev pushed a commit that referenced this pull request Dec 9, 2020
* bpo-40791: Make compare_digest more constant-time.

The existing volatile `left`/`right` pointers guarantee that the reads will all occur, but do not guarantee that the values read will be _used_. So a compiler can still short-circuit the loop, saving e.g. the overhead of doing the xors and especially the overhead of the data dependency between `result` and the reads. That would make performance depend on where the first unequal byte occurs. This change prevents that optimization.

(This is change #1 from https://bugs.python.org/issue40791 .)
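
In Python terms, the intent of the loop is roughly the following (a sketch only: the real code is C with volatile reads, and pure Python can't make genuine timing guarantees):

def compare_digest_sketch(a, b):
    # Fold every byte pair's difference into `result` with OR/XOR rather
    # than returning at the first mismatch, so the loop's cost does not
    # depend on where (or whether) the inputs differ.
    if len(a) != len(b):
        return False
    result = 0
    for x, y in zip(a, b):
        result |= x ^ y
    return result == 0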