Signed byte is incorrectly transferred over interop on iOS/Catalyst ARM64 #100891
Comments
@huoyaoyuan the static-cast code was just to show what happens to a roundtripped value, but we have code deeper down that checks ranges of the sbyte (in the specific case -12..12) that fails for all negative numbers. The issue only occurred on iOS and Catalyst when running on ARM64. Windows ARM64 and Android ARM64 are just fine.
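To make that concrete, here is a hypothetical sketch of the kind of native-side range check described above (the real check lives deeper in the library):

```cpp
#include <cstdint>

// Hypothetical native-side range check (illustration only).
// Under the Apple arm64 ABI an optimized callee trusts the caller to have
// sign-extended 'value'; if the caller zero-extends instead, -5 shows up as
// 251 in the argument register and every negative input fails the check.
extern "C" bool IsInRange(int8_t value)
{
    return value >= -12 && value <= 12;
}
```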
I wonder if the same thing happens with `System.Int16` (`short`) <-> `int16_t`?
@rolfbjarne we have not seen this happen with shorts.
@jakobbotsch Do we have the sign and zero extensions implemented correctly in RyuJIT for the Apple Arm64 ABI?
Seems like we do; I tested it. Regarding Mono, the problem seems reproducible only if the native library is built with the size optimization flag.
Removing the size optimization flag when building the native library makes Mono produce the expected result as well. We should look into why Mono fails when the library is built with it.
I don't see where RyuJIT takes explicit care of this normalization. RyuJIT in general keeps all values in registers normalized up to 32 bits, which makes it very hard or impossible to hit issues around this with IL created from C#. But you can hit this with RyuJIT as well for IL-based programs on both arm32 (which has the same ABI requirement) and Apple arm64. I opened #101046 about it.
Looking at what apple clang produces. This:

```c
#include <stdint.h>

int32_t foo (int8_t i)
{
    return i;
}

int32_t bar (int8_t i);

int32_t call_bar (int8_t i, int8_t j)
{
    return bar(i + j);
}
```

produces the following LLVM IR when targeting the Apple ARM64 ABI:
You get the same IR with apple clang targeting the standard ARM64 ABI:
However, the final assembly differs. Apple ABI:
Linux ABI:
Conclusion: we're probably not passing the correct target triple to something in our backend.
Investigation

I think the problem is that we are not decorating params/arguments narrower than 32 bits with the correct LLVM IR attributes. Consider a simple example:

```cpp
#include <cstdint>

extern "C" { int32_t SByteToInt(int8_t value); }

int main() {
    int result = SByteToInt(-5);
    return result;
}
```

Compiled with apple clang, this produces the following LLVM IR:

```llvm
; Function Attrs: noinline norecurse optnone ssp uwtable(sync)
define i32 @main() #0 {
  %1 = alloca i32, align 4
  %2 = alloca i32, align 4
  store i32 0, ptr %1, align 4
  %3 = call i32 @SByteToInt(i8 noundef signext -5)
  store i32 %3, ptr %2, align 4
  %4 = load i32, ptr %2, align 4
  ret i32 %4
}

declare i32 @SByteToInt(i8 noundef signext) #1
```

This gives the following assembly:

```asm
_main:
0000000000000000    sub    sp, sp, #0x20
0000000000000004    stp    x29, x30, [sp, #0x10]
0000000000000008    add    x29, sp, #0x10
000000000000000c    stur   wzr, [x29, #-0x4]
0000000000000010    mov    w0, #-0x5          // <-- correct
0000000000000014    bl     0x14
```

However, when we use Mono with LLVM to P/Invoke the same function, we generate:

```llvm
; Function Attrs: noinline
define hidden monocc i32 @Program_Program_EONEConvert() #2 gc "coreclr" !dbg !89 {
...
gc.safepoint_poll.exit:                           ; preds = %BB0, %gc.safepoint_poll.poll.i
  %2 = notail call monocc i32 @p_8_plt_Program_Program_SByteToInt_sbyte_llvm(i8 -5), !dbg !90, !managed_name !91
  ret i32 %2, !dbg !92
}

declare hidden i32 @p_8_plt_Program_Program_SByteToInt_sbyte_llvm(i8) local_unnamed_addr #0
```

which gives the following assembly:

```asm
_Program_Program_MyConvert:
0000000000000290    stp    x29, x30, [sp, #-0x10]!
0000000000000294    adrp   x8, 0 ; 0x0
0000000000000298    ldr    x8, [x8]
000000000000029c    ldr    x8, [x8]
00000000000002a0    cbnz   x8, 0x2b4
00000000000002a4    mov    w0, #0xfb          // <-- wrong
00000000000002a8    bl     0x2a8
```

Solution

Adapt https://github.com/dotnet/runtime/blob/main/src/mono/mono/mini/mini-llvm.c#L2333 (and other places) to properly decorate params/args with the correct set of parameter attributes (https://llvm.org/docs/LangRef.html#parameter-attributes).

FWIW, for comparison, a code path that passes the value correctly generates:

```asm
Program_EONEConvert:
00000001003e8280    stp    x29, x30, [sp, #-0x10]!
00000001003e8284    mov    x29, sp
00000001003e8288    adrp   x16, 2243 ; 0x100cab000
00000001003e828c    add    x16, x16, #0xa50
00000001003e8290    ldr    x0, [x16, #0x38]
00000001003e8294    ldr    x17, [x0]
00000001003e8298    cbz    x17, 0x1003e82a0
00000001003e829c    bl     plt
00000001003e82a0    mov    x0, #-0x5
00000001003e82a4    bl     plt_Program_SByteToInt_sbyte
```
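As an illustration of the Solution above, decorating a call-site argument with `signext` through the LLVM-C API might look roughly like this (a hedged sketch with hypothetical naming, not the actual mini-llvm.c change):

```cpp
#include <llvm-c/Core.h>

// Hypothetical helper: mark a call-site argument 'signext' so LLVM emits the
// caller-side sign extension that the Apple arm64 ABI requires for signed
// integer arguments narrower than 32 bits.
static void mark_arg_signext(LLVMContextRef ctx, LLVMValueRef call, unsigned arg_index)
{
    unsigned kind = LLVMGetEnumAttributeKindForName("signext", 7);
    LLVMAttributeRef attr = LLVMCreateEnumAttribute(ctx, kind, 0);
    // Attribute index 0 is the return value; parameter indices are 1-based.
    LLVMAddCallSiteAttribute(call, arg_index + 1, attr);
}
```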
Description
When sending an `sbyte` to a C++ library (as `int8_t`) on iOS and MacCatalyst ARM64, the sbyte is not sent across correctly.
Reproduction Steps
Open `MauiTestApp\MauiTestApp.csproj` and run it on an iOS device or iOS ARM64 simulator. The value it displays (251) comes from sending an sbyte of -5 to native code, letting the native code convert it to an int32, and returning it again. But because something goes wrong at the interop level, the value gets sent across as 251. This seems to only affect sbyte going into native code - the second part of the code tests signed bytes coming out of native code and handles negative values just fine.
A Windows version of the repro is also provided (I tested Windows ARM64 and Android ARM64 and these don't exhibit this behavior).
C++ code:
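(The repro source itself is not reproduced above; as a rough illustration based on the `SByteToInt` declaration shown in the investigation comment, with details assumed, the native side would look something like this:)

```cpp
#include <cstdint>

// Illustrative sketch of the native side of the repro: take an int8_t (sbyte),
// widen it to int32_t, and return it so the managed caller can observe what
// actually arrived across the interop boundary.
extern "C" int32_t SByteToInt(int8_t value)
{
    return value;  // -5 should come back as -5; with the bug it comes back as 251
}
```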
C# code:
Expected behavior
Signed bytes get sent across the interop correctly on iOS ARM64, like on other platforms.
Actual behavior
`-5` gets incorrectly transferred.
Regression?
No response
Known Workarounds
Don't use sbyte; transfer the value as an int32_t instead and cast it once in native code (see the sketch below).
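A minimal sketch of the native side of that workaround (hypothetical function name; the managed `DllImport` signature changes to pass an `int` accordingly):

```cpp
#include <cstdint>

// Workaround sketch: accept a full int32_t at the interop boundary, where no
// sub-32-bit extension rules apply, and narrow it back to int8_t inside
// native code.
extern "C" int32_t SByteToIntViaInt32(int32_t value)
{
    int8_t narrowed = static_cast<int8_t>(value);
    return narrowed;
}
```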
Configuration
.NET 8 ARM64 iOS and Catalyst
Other information
Interesting details for the Apple ARM64 ABI: https://developer.apple.com/documentation/xcode/writing-arm64-code-for-apple-platforms#Pass-arguments-to-functions-correctly (unlike the standard AArch64 ABI, Apple's ABI requires the caller, not the callee, to sign- or zero-extend arguments narrower than 32 bits).
I wonder if .NET zero-extends our sbyte instead of sign-extending it. `-5` in two's complement `int8_t` would be `1111 1011`. Zero-extended to `int32_t` that would be `0000 0000 0000 0000 0000 0000 1111 1011`, which is `251`.
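A quick standalone check of that arithmetic (illustrative only, not part of the repro project):

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    int8_t v = -5;                                   // 0xFB in two's complement
    uint32_t zeroExtended = static_cast<uint8_t>(v); // 251: what the broken path effectively passes
    int32_t  signExtended = v;                       // -5: what the Apple ABI expects the caller to pass
    std::printf("%u %d\n", zeroExtended, signExtended);
    return 0;
}
```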