clang build fails with inline ASM on NEON64 (Apple M1) #96
Comments
Apparently it only happens when compiling with optimization disabled (something like `-O0`).
So the problem is that the following four registers, which together form the lookup table, are not sequentially numbered:
That sucks, because as you mention, the code goes to great lengths to load that table into four hardcoded sequential registers. For some unclear reason, the compiler chooses to rename those registers when returning from the function. I was really hoping that no reasonable compiler would ever do that, because the hardcoded registers are already taken and the table stays live for the duration of the encoder. Yet here we are; my little gambit failed. Testing a fix sucks, because I don't have an ARM64 machine that I can test on, and even then I'm not sure that I can reproduce the bug. Could you try changing line 28 to this:
Another thing to try is to add the `__attribute__((always_inline))` attribute to the table-loading function:
static inline uint8x16x4_t
load_64byte_table (const uint8_t *p)
{
#ifdef BASE64_NEON64_USE_ASM
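For context, here is a hedged sketch of what a complete register-pinning load function of this shape might look like. It is reconstructed from the discussion above (variables pinned to v8..v11 feeding a single four-register `ld1`), not copied from the repository, so the exact operand constraints and the fallback path are assumptions:

```c
#include <stdint.h>
#include <arm_neon.h>

static inline uint8x16x4_t
load_64byte_table (const uint8_t *p)
{
#ifdef BASE64_NEON64_USE_ASM
	// Pin the four table vectors to sequential registers so that a
	// four-register ld1 (and, later, tbl lookups) can treat them as
	// one contiguous 64-byte table.
	register uint8x16_t t0 __asm__ ("v8");
	register uint8x16_t t1 __asm__ ("v9");
	register uint8x16_t t2 __asm__ ("v10");
	register uint8x16_t t3 __asm__ ("v11");

	__asm__ (
		"ld1 {%[t0].16b, %[t1].16b, %[t2].16b, %[t3].16b}, [%[src]] \n\t"
		: [t0] "=w" (t0), [t1] "=w" (t1), [t2] "=w" (t2), [t3] "=w" (t3)
		: [src] "r" (p)
		: "memory"  // conservative: the asm reads 64 bytes at p
	);

	return (uint8x16x4_t) {
		.val = { t0, t1, t2, t3 }
	};
#else
	// Plain intrinsics: the compiler is free to pick any registers.
	uint8x16x4_t tbl;
	tbl.val[0] = vld1q_u8(p +  0);
	tbl.val[1] = vld1q_u8(p + 16);
	tbl.val[2] = vld1q_u8(p + 32);
	tbl.val[3] = vld1q_u8(p + 48);
	return tbl;
#endif
}
```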
Both suggestions result in the same compiler errors. FWIW I don't have an arm64 device handy either, so I just installed and used clang (v14) with an aarch64 sysroot (https://developer.arm.com/-/media/Files/downloads/gnu-a/10.3-2021.07/binrel/gcc-arm-10.3-2021.07-x86_64-aarch64-none-linux-gnu.tar.xz).
FWIW, here's the command line I'm using (from the project root, on Linux) to test:
Thanks for linking to the sysroot and for sharing your script! Those will be useful in the future. I was able to reproduce the bug and also confirm your conclusion that my proposed fixes don't work. This looks like a nasty bug: even when I inline the table-loading code into the encoder loop, it appears. I'm unsure of how to fix this, other than to rewrite the whole encoder logic in assembly. (That was something I was actually planning on anyway, because it would let me interleave loads and stores more naturally.) Maybe the best fix for the time being is indeed the one you pushed: just disable inline asm for clang when not optimizing.
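As a rough illustration of that workaround, the inline-asm path can be gated behind a preprocessor check. This is a hedged sketch, not the project's actual guard: the macro name `BASE64_NEON64_USE_ASM` is taken from the snippet above, and `__clang__` / `__OPTIMIZE__` are standard predefined compiler macros.

```c
// Hedged sketch: enable the inline-asm table load only where the
// register-renaming problem has not been observed, i.e. skip it for
// unoptimized clang builds.
#if defined(__aarch64__) && !(defined(__clang__) && !defined(__OPTIMIZE__))
#  define BASE64_NEON64_USE_ASM
#endif
```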
Convert the full encoding loop to an inline assembly implementation for systems that can use inline assembly. The motivation for this work is that when optimization was turned off on recent versions of clang, the encoding table would not be loaded into sequential registers (issue #96). This happened despite taking pains to ensure that the compiler would use an explicit set of registers (v8-v11).

Finding ourselves at the bottom of our bag of tricks, and faced with a real bug, we were left with no option but to reimplement the _entire_ encoding loop in inline assembly. It is the only way to get full control over the loading of the encoding table. Thankfully, aarch64 assembly is not very difficult to write by hand.

In making this change, we can/should add some optimizations in the loop unrolling for rounds >= 8. The unrolled loop should optimize pipeline efficiency by interleaving memory operations (like loads and stores) with data operations (like table lookups). The best way to achieve this is to blend the unrolled loops such that one loop prefetches the registers needed in the next loop. To make that possible without duplicating massive amounts of code, we abstract the various assembly blocks into preprocessor macros and instantiate them as needed. This mixing of the preprocessor with inline assembly is perhaps a bit gnarly, but I think the usage is simple enough that the advantages (code reuse) outweigh the disadvantages.

Code was tested on a Debian VM running under QEMU. Unfortunately this does not let us see how the actual bare-metal performance increases/decreases.
Yesterday I set up a small AArch64 Debian VM using QEMU. I've created a new issue (#98) for this enhancement and also pushed a testing branch. This was the nuclear option, but also the only solution I saw to fixing this bug. I was not hopeful that I could find any more tricks to get the compiler to generate the correct code by itself.
Convert the full encoding loop to an inline assembly implementation for compilers that support inline assembly. The motivation for this change is issue #96: when optimization is turned off on recent versions of clang, the encoding table is sometimes not loaded into sequential registers. This happens despite taking pains to ensure that the compiler uses an explicit set of registers for the load (v8..v11).

This leaves us with few options besides rewriting the full encoding loop in inline assembly. Only then can we be absolutely certain that the right registers are used. Thankfully, AArch64 assembly is not very difficult to write by hand.

In making this change, we optimize the unrolled loops for rounds >= 8 by interleaving memory operations (loads, stores) with data operations (arithmetic, table lookups). Splitting these two classes of instructions avoids pipeline stalls and data dependencies. The current loop iteration also prefetches the data needed in the next iteration. To allow that without duplicating massive amounts of code, we abstract the various assembly blocks into preprocessor macros and instantiate them as needed. This mixing of the preprocessor with inline assembly is perhaps a bit gnarly, but I think the usage is simple enough that the advantages (code reuse) outweigh the disadvantages.

Code was tested on a Debian VM running under QEMU. Unfortunately, testing in a VM does not let us measure the actual performance impact.
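As an illustration of the macro approach described above, here is a hedged, self-contained sketch of a single encoding round. The macro names (`ENC_LOAD`, `ENC_SHUFFLE`, `ENC_TRANSLATE`, `ENC_STORE`), register assignments, and function shape are hypothetical rather than the project's actual code; it only shows how preprocessor macros can compose asm fragments while the translation table stays pinned to v8..v11.

```c
#include <stdint.h>
#include <arm_neon.h>

// Reusable asm fragments. Adjacent string literals concatenate, so these
// macros can be instantiated in any order inside one asm block; an unrolled
// loop would place the next round's ENC_LOAD between the current round's
// ENC_TRANSLATE and ENC_STORE to interleave memory and data operations.
#define ENC_LOAD \
	"ld3  {v12.16b, v13.16b, v14.16b}, [%[src]], #48          \n\t"

#define ENC_SHUFFLE /* split 3x8 input bits into 4x6-bit indices */ \
	"ushr v2.16b, v12.16b, #2                                 \n\t" \
	"ushr v3.16b, v13.16b, #4                                 \n\t" \
	"ushr v4.16b, v14.16b, #6                                 \n\t" \
	"sli  v3.16b, v12.16b, #4                                 \n\t" \
	"sli  v4.16b, v13.16b, #2                                 \n\t" \
	"movi v6.16b, #0x3f                                       \n\t" \
	"and  v3.16b, v3.16b,  v6.16b                             \n\t" \
	"and  v4.16b, v4.16b,  v6.16b                             \n\t" \
	"and  v5.16b, v14.16b, v6.16b                             \n\t"

#define ENC_TRANSLATE /* look the indices up in the table in v8..v11 */ \
	"tbl  v2.16b, {v8.16b, v9.16b, v10.16b, v11.16b}, v2.16b  \n\t" \
	"tbl  v3.16b, {v8.16b, v9.16b, v10.16b, v11.16b}, v3.16b  \n\t" \
	"tbl  v4.16b, {v8.16b, v9.16b, v10.16b, v11.16b}, v4.16b  \n\t" \
	"tbl  v5.16b, {v8.16b, v9.16b, v10.16b, v11.16b}, v5.16b  \n\t"

#define ENC_STORE \
	"st4  {v2.16b, v3.16b, v4.16b, v5.16b}, [%[dst]], #64     \n\t"

// One round: 48 input bytes -> 64 base64 indices. A real loop would load the
// table once up front and keep it live across all rounds; it is reloaded here
// only to keep the sketch self-contained.
static inline void
enc_round (const uint8_t **src, uint8_t **dst, const uint8_t *table)
{
	// Pin the table to v8..v11 so the hardcoded names in the macros are valid.
	register uint8x16_t t0 __asm__ ("v8")  = vld1q_u8(table +  0);
	register uint8x16_t t1 __asm__ ("v9")  = vld1q_u8(table + 16);
	register uint8x16_t t2 __asm__ ("v10") = vld1q_u8(table + 32);
	register uint8x16_t t3 __asm__ ("v11") = vld1q_u8(table + 48);

	__asm__ volatile (
		ENC_LOAD
		ENC_SHUFFLE
		ENC_TRANSLATE
		ENC_STORE
		: [src] "+r" (*src), [dst] "+r" (*dst)
		: "w" (t0), "w" (t1), "w" (t2), "w" (t3)
		: "v2", "v3", "v4", "v5", "v6", "v12", "v13", "v14", "memory"
	);
}
```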
clang must not be allocating `l3` in a contiguous register? While building 3eab8e6, the compiler errors are: