Reduce the number of comparisons for range checking. #67741
Python's core is full of bounds checks like this one in Objects/listobject.c:

```c
static PyObject *
list_item(PyListObject *a, Py_ssize_t i)
{
    if (i < 0 || i >= Py_SIZE(a)) {
        ...
```

Agner Fog's high-level language optimization guide, http://www.agner.org/optimize/optimizing_cpp.pdf, in section 14.2 Bounds Checking, shows a way to fold this into a single check:

```diff
-    if (i < 0 || i >= Py_SIZE(a)) {
+    if ((unsigned)i >= (unsigned)(Py_SIZE(a))) {
         if (indexerr == NULL) {
             indexerr = PyUnicode_FromString(
                 "list index out of range");
```

The old generated assembly code looks like this:

```
_list_item:
```

The new disassembly looks like this:

```
_list_item:
```

Note, the new code not only saves a comparison/conditional-jump pair, it also avoids the need to adjust %rsp on the way in and on the way out, for a net savings of four instructions along the critical path. With good branch prediction, the current approach is very low cost; however, Agner Fog's recommendation is never more expensive, is sometimes cheaper, saves a possible misprediction, and reduces the total code generated. All in all, it is a net win.

I recommend we put in a macro of some sort so that this optimization is expressed exactly once in the code and has a good, clear name with an explanation of what it does.
Yes, this is a technique commonly used in STL implementations; it is why sizes and indices in the STL are unsigned. But in the CPython implementation, sizes are signed (Py_ssize_t). The problem with using this optimization (which is low-level rather than high-level) is that we need to know the unsigned version of the type of the compared values.
Here is a bug: the type of i and Py_SIZE(a) is Py_ssize_t, so when cast to unsigned int, the highest bits are lost. The correct casting type is size_t. In changeset 5942fd9ab335 you introduced a bug.
Yes, I had just seen that earlier today and was deciding whether to substitute size_t for the unsigned cast or to just revert. I believe size_t is guaranteed to hold any array index and that a cast from a non-negative Py_ssize_t would not lose bits.
Wouldn't size_t always work for Py_ssize_t?
Yes. But it wouldn't work for, say, off_t. The consistent way is to always use size_t instead of Py_ssize_t. But that boat has sailed.
I'm only proposing a bounds checking macro for the Py_ssize_t case, which is what all of our IndexError tests look for. Also, please look at the attached deque fix.
It looks correct to me, but I would change the type and introduce a few new variables to get rid of the casts.
Attaching a diff for bounds checking in Objects/listobject.c.
Also attaching a bounds checking patch for deques.
Parentheses around Py_SIZE() are redundant. Are there any benchmarking results that show a speed-up? Such microoptimization makes sense in tight loops, but the optimized source code looks more cumbersome and error-prone.
I think the source in listobject.c would benefit from a well-named macro for this; that would provide the most clarity. For deques, I'll just put in the simple patch because it only applies to a place that is already doing unsigned arithmetic/comparisons.

FWIW, I don't usually use benchmarking on these kinds of changes; the generated assembler is sufficiently informative. Benchmarking each tiny change risks getting trapped in a local minimum. Also, little timeit tests tend to branch the same way every time (which won't show the cost of prediction misses), tend to have all code and data in cache (so you don't see the effects of cache misses), and risk tuning to a single processor (in my case, a Haswell). Instead, I look at the code generated by GCC and Clang to see that it does less work.
New changeset 1e89094998b2 by Raymond Hettinger in branch 'default':
My point is that if the benefit is too small (say < 5% in microbenchmarks), it
FWIW, here is a small patch to show how this can be done consistently and with code clarity.