[NativeAOT] linux-arm bring up #97729
Tagging subscribers to this area: @agocke, @MichalStrehovsky, @jkotas
Issue Details: This is a tracking issue for the known problems that need to be resolved to get NativeAOT support working on the linux-arm platform. Known issues:
Other things requiring clean up:
|
State as of ba8993f + PRs #97746, #97756 and #97757:
|
I ran the smoke tests in Release configuration. Some of them reliably fail, which makes the debugging easier. Apparently we now get an incorrect answer for --
printf("%x %x %x\r\n", (uintptr_t)&RhpAssignRefAVLocation, (uintptr_t)&RhpAssignRefAVLocation & ~1, faultingIP);
// prints "4b9539 4b9539 4b9538"
Don't you just love compilers? (Technically, clang is not wrong here since
|
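A minimal sketch of a workaround for the folded mask (an assumption on my part, not necessarily the fix that was taken): launder the address through an empty inline-asm barrier so clang can no longer prove that bit 0 of the function address is clear and optimize the mask away.

// Sketch: keep the Thumb-bit mask alive at run time instead of letting it constant-fold.
uintptr_t ip = (uintptr_t)&RhpAssignRefAVLocation;
__asm__("" : "+r"(ip));                  // opaque to the optimizer
uintptr_t masked = ip & ~(uintptr_t)1;   // now really clears bit 0 at run time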
Please keep this issue on topic. I am doing this in my free time; I do not plan to work on the win-x86 port. There's already an open issue for that. |
With the in-flight PRs I can get most of the smoke tests running in Release mode. There's one remaining issue with unwinding during GC in
The tests pass with |
#97863 fixes the unwinding issue in Release builds above. The test still crashes in pure Release configuration though. It passes when the Release
Stack trace:
|
So, for the last crash in GC suspension I may need some help with verifying some assumptions. I can easily reproduce it and it's happening at the same point in the same function:
Decoding the GCInfo on the GC thread indeed shows that there's a live variable in register R12, and since it's a scratch register, (cc @VSadov) |
Safe points should only be created for call returns. Scratch registers cannot be live at call returns, which is why they are not handled for safe points. (@VSadov is changing some of these invariants in #95565.) What does the code around the safe point look like? It may be useful to generate a JIT dump for the method in question to see why the JIT decided to emit the safe point at this spot. |
It's this code (the offsets are off by one, i.e. +85 is really +86):
My suspicion is that the |
Right now threads can only be interrupted in interruptible code. Volatile registers can contain GC refs there. Also, threads can self-interrupt when hitting a hijacked return; volatile regs are dead there, but return registers may contain live GC refs. After that there is unwinding through return sites where the returns did not happen yet; volatile registers are dead. Since there are no calls around the interruption location in your sample, it must be in a fully interruptible method. |
The crash happens only when the C runtime part is built with optimizations, so most likely there's an issue with decoding the GC info (the compiler is very eager to optimize out alignment handling when the wrong pointer type is used). I'll dump the GC info.
I really hope it's just misdecoded GC info... because a fully interruptible method should not use the R12 register (or we would need to save it from the frame, which is trivial). |
The fact that C optimizations matter is suspicious indeed. For the r12 register, I do not recall whether its use as a scratch register is forbidden (can't easily check that right now). |
It is fine for fully interruptible methods to use the R12 register to store a GC reference. I think it is a bug that it is not initialized in the REGDISPLAY in
Thanks. I came to the same conclusion. The GCInfo shows that it's a fully interruptible method. I'll send a PR. |
@am11: thank you very much for what you provided. I think this will be quite helpful. I am now trying with WSL, docker and the images provided by I am currently testing building the default console program natively (no AOT yet), expecting linux-x64, using It seems there is something wrong with the nuget configuration in the docker image that MS provides, or (more probably) I am missing something. I will try a little bit more and then I think I'll turn to your solution, thanks a lot for providing it! EDIT: Why there is not even a .nuget folder in the MS image is beyond me. |
Hello @am11 ,
Using "llvm" instead of "llvm14" at However, when doing: In fact this is exactly the same error that I got when trying with mcr.microsoft.com/dotnet/nightly/sdk:9.0-preview-jammy-aot + adding the .nuget config file. So probably llvm14 is required, but I cannot get past this failure with liblldb-3.9-dev... |
Hey @sonatique, sorry for the confusion. I was testing these steps in a container and stitched them together as a Dockerfile; I should have tested the final version. 😅 I've now updated the Dockerfile and executed all steps in WSL, let's give it another try! Changes:
|
@am11 : trying now. Thanks a lot! |
@am11 . I am still getting the same issue. I must be doing something wrong.
Any idea by chance? Note that I am on an x64 machine. Since Windows ARM64 is not very common I didn't mention it earlier, but I see no arm* "supported emulations" in the list; I wonder why. In case you wonder, here is my csproj file:
and I just have the default program.cs generated by Thanks in advance! |
First try the exact Dockerfile; if that works (which it does on the two machines I've tested), then customize it for your project. |
@am11: well, obviously I should have started here, because, as you expected, everything completed without error with your full Dockerfile. I was a bit bold thinking I could directly do what I wanted. EDIT: I have been able to achieve what I wanted, super great, thanks! |
Thanks again @am11! Now that I better understand what I am doing and have a working setup, I am curious about one of the things you wrote earlier. You said you "installed lld, it was either that or apt install binutils-arm" and later you wrote the same about llvm. So you appear to have made efforts to avoid "binutils-arm", though you wrote "binutils/bfd is slightly better at size optimization". So I am wondering: what would be wrong in "simply" using binutils? I have to admit I tried for some time to install binutils and remove or replace Do you have any pointer regarding how to use binutils? I wish I could compare the binaries produced by lld/llvm and binutils. Thanks in advance if you have time for this low-priority question. |
The gcc toolchain (gcc, binutils etc.) is architecture-specific, while the llvm toolchain is multiarch. That means that to cross-compile, lld from the host architecture's llvm toolchain will do the job, while gcc requires a target-arch-specific package, e.g. See the previous attempt at using the gcc toolchain for cross-compilation: #78559; the conclusion was that it's best to stick with the llvm toolchain for cross-compilation. The difference in size is a few KBs and it's not meaningful in the grand scheme of things. |
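For illustration (a sketch using Ubuntu package names; not taken from the thread): the llvm toolchain selects the target per invocation against a sysroot, while gcc needs one dedicated cross-toolchain package per target.

# llvm/clang: one host toolchain, target and sysroot chosen per invocation
clang --target=arm-linux-gnueabihf --sysroot=/crossrootfs/arm -fuse-ld=lld hello.c -o hello
# gcc: a separate cross package per target architecture
sudo apt install gcc-arm-linux-gnueabihf binutils-arm-linux-gnueabihf
arm-linux-gnueabihf-gcc --sysroot=/crossrootfs/arm hello.c -o hello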
Hi @am11 , OK I see, thanks a lot! |
Thanks a lot for the Dockerfile, @am11, and for asking the questions, @sonatique! With the provided Dockerfile, we can use vscode dev containers to develop net9.0 apps in the IDE and compile them AOT for linux-arm. It works great! Awesome work on native AOT for linux-arm. It speeds up my application a lot. Set it up like this:
{
"build": { "dockerfile": "Dockerfile" },
"customizations": {
"vscode": {
"extensions": ["ms-dotnettools.csdevkit"]
}
}
}
FROM --platform=$BUILDPLATFORM ubuntu:latest AS builder
RUN apt update && apt install -y clang debootstrap curl lld llvm
RUN mkdir /dev/arm; \
curl -sSL https://raw.githubusercontent.com/dotnet/arcade/main/eng/common/cross/arm/sources.list.jammy -o /dev/arm/sources.list.jammy; \
curl -sSL https://raw.githubusercontent.com/dotnet/arcade/main/eng/common/cross/build-rootfs.sh |\
bash /dev/stdin arm jammy llvm15 lldb15
RUN mkdir -p "$HOME/.dotnet" "$HOME/.nuget/NuGet";
RUN curl -sSL https://dot.net/v1/dotnet-install.sh | bash /dev/stdin --quality daily --channel 9.0;
RUN cat > "$HOME/.nuget/NuGet/NuGet.Config" <<EOF
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<packageSources>
<add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
<add key="dotnet9" value="https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet9/nuget/v3/index.json" />
</packageSources>
</configuration>
EOF
ENV DOTNET_NOLOGO=1
ENV PATH "$PATH:/root/.dotnet"

Then you can use vscode tasks to publish the app, like this:
{
"version": "2.0.0",
"tasks": [
{
"command": "dotnet",
"args": [
"publish",
"application.csproj",
"-r",
"linux-arm",
"-c",
"Release",
"--self-contained",
"true",
"-o",
"${workspaceFolder}/out",
"-p:PublishSingleFile=false",
"-p:EnableCompressionInSingleFile=true",
"-p:PublishAot=true",
"-p:LinkerFlavor=lld",
"-p:ObjCopy=llvm-objcopy",
"-p:SysRoot=\"/.tools/rootfs/arm\""
],
"options": {
"cwd": "${workspaceFolder}"
},
"group": "build",
"label": "dotnet9 publish release AOT"
}
]
} |
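The publish output in ${workspaceFolder}/out is a self-contained linux-arm binary, so it can be copied to the device and started directly. For example (host name and user are placeholders):

scp out/application pi@raspberrypi:~/application
ssh pi@raspberrypi ./application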
@tunger, very nice! Thanks for sharing. After #101213 is merged, we can use the same mechanism for the musl variant:
- bash /dev/stdin arm jammy llvm15 lldb15
+ bash /dev/stdin arm alpine llvm15 lldb15
and in tasks.json:
  "-r",
- "linux-arm",
+ "linux-musl-arm",
Once .NET 9 is shipped, we will be able to use the prebuilt official docker images, which will remove the need to set up the cross environment. The reason for recommending official images is somewhat(⚓) important: the more people start using this kind of experimental solution, the more confusion it's going to cause. For instance, this Dockerfile (for linux-arm, not linux-musl-arm) requires 'nested virtualization' support for chroot (fakechroot has some issues), so I tested this Dockerfile on a few public CI systems; here is the support situation:
⚓ It is only about "building" the image, dotnet-publish does not use chroot. |
@am11, I just followed your instructions from your comment #97729 (comment) step by step, but unfortunately, docker build fails with the output below. I tried your steps inside an Ubuntu 22.04 WSL (with docker installed) on a Windows 11 x64 host machine. Is it possible that this file changed in the meantime? I get the same error at the exact same step if I try to run each command directly in my WSL without using docker at all.

/tmp/arm-builder$ docker build . -t armv7-nativeaot-webapi
[+] Building 26.6s (7/8) docker:default
=> [internal] load build definition from Dockerfile 0.2s
=> => transferring dockerfile: 1.30kB 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 1.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/5] FROM docker.io/library/ubuntu:latest@sha256:3f85b7caad41a95462cf5b787d8a04604c8262cdcdf9a472b8c52ef83375fe15 0.0s
=> CACHED [2/5] RUN apt update && apt install -y clang debootstrap curl lld llvm 0.0s
=> CACHED [3/5] RUN mkdir -p "$HOME/.dotnet9" "$HOME/.nuget/NuGet"; curl -sSL https://dot.net/v1/dotnet-install.sh | bash /dev/stdin --quality dai 0.0s
=> ERROR [4/5] RUN mkdir /dev/arm; curl -sSL https://raw.githubusercontent.com/dotnet/arcade/main/eng/common/cross/arm/sources.list.jammy -o /dev 25.4s
------
> [4/5] RUN mkdir /dev/arm; curl -sSL https://raw.githubusercontent.com/dotnet/arcade/main/eng/common/cross/arm/sources.list.jammy -o /dev/arm/sources.list.jammy; curl -sSL https://raw.githubusercontent.com/dotnet/arcade/main/eng/common/cross/build-rootfs.sh | bash /dev/stdin arm jammy llvm15 lldb15:
1.016 I: Retrieving InRelease
1.406 I: Checking Release signature
1.413 I: Valid Release signature (key id F6ECB3762474EDA9D21B7022871920D1991BC93C)
1.834 I: Retrieving Packages
2.035 I: Validating Packages
2.159 I: Resolving dependencies of required packages...
2.301 I: Resolving dependencies of base packages...
3.377 I: Checking component main on http://ports.ubuntu.com...
3.634 I: Retrieving adduser 3.118ubuntu5
4.366 I: Validating adduser 3.118ubuntu5
4.381 I: Retrieving apt 2.4.5
4.589 I: Validating apt 2.4.5
........
24.28 I: Extracting usrmerge...
24.29 I: Extracting util-linux...
24.33 I: Extracting zlib1g...
24.66 W: Failure trying to run: chroot "/crossrootfs/arm" /bin/true
24.66 W: See /crossrootfs/arm/debootstrap/debootstrap.log for details
------
Dockerfile:20
--------------------
19 |
20 | >>> RUN mkdir /dev/arm; \
21 | >>> curl -sSL https://raw.githubusercontent.com/dotnet/arcade/main/eng/common/cross/arm/sources.list.jammy -o /dev/arm/sources.list.jammy; \
22 | >>> curl -sSL https://raw.githubusercontent.com/dotnet/arcade/main/eng/common/cross/build-rootfs.sh |\
23 | >>> bash /dev/stdin arm jammy llvm15 lldb15
24 |
--------------------
ERROR: failed to solve: process "/bin/sh -c mkdir /dev/arm; curl -sSL https://raw.githubusercontent.com/dotnet/arcade/main/eng/common/cross/arm/sources.list.jammy -o /dev/arm/sources.list.jammy; curl -sSL https://raw.githubusercontent.com/dotnet/arcade/main/eng/common/cross/build-rootfs.sh | bash /dev/stdin arm jammy llvm15 lldb15" did not complete successfully: exit code: 1 |
On WSL I had to manually update the binfmts configuration to register the QEMU emulators. You may want to check |
Still the same failure ( sudo update-binfmts --display
sudo update-binfmts --enable |
|
Yes, it was a typo and the display switch outputs:
|
That doesn't list any of the QEMU user packages. There would be an entry similar to:
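(An illustrative, abbreviated entry; the interpreter path and the magic/mask bytes vary by distro and qemu version:)

qemu-arm (enabled):
     package = qemu-user-static
        type = magic
      offset = 0
       magic = \x7f\x45\x4c\x46...\x28\x00
        mask = \xff\xff\xff\xff...
 interpreter = /usr/libexec/qemu-binfmt/arm-binfmt-P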
That means the QEMU user emulators are not installed properly and thus the chroot binaries in the debootstrap process cannot execute. I don't remember anymore how I fixed this, but hopefully this points you in the right direction to Google and fix the problem. |
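For reference, the packages that register these emulators on Ubuntu/WSL (the same ones that appear in the working setup later in this thread) can be installed like this:

sudo apt install -y qemu-user-static binfmt-support
# verify the arm entry is now present and enabled
sudo update-binfmts --display | grep -A2 qemu-arm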
I was using this Dockerfile https://github.com/am11/CrossRepoCITesting/blob/master/linux-arm-aot/Dockerfile and this workflow https://github.com/am11/CrossRepoCITesting/blob/master/.github/workflows/docker-naot-arm32.yml. I found that running |
Okay, thanks for all the information. My specific issue from comment #97729 (comment) was resolved by doing Will do more tests on Tuesday and post a final solution in case I find one for me. |
I was finally able to compile an app for an embedded Debian 11 (bullseye) system by following these setup steps on the build machine (Azure DevOps Services, Microsoft Hosted Agent, vmImage: ubuntu-20.04):

sudo apt install -y clang debootstrap curl lld llvm qemu-user-static binfmt-support
mkdir -p "$HOME/.dotnet9" "$HOME/.nuget/NuGet"
curl -sSL https://dot.net/v1/dotnet-install.sh | bash /dev/stdin --quality daily --channel 9.0 --install-dir "$HOME/.dotnet9"
cat > "$HOME/.nuget/NuGet/NuGet.Config" <<EOF
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<packageSources>
<add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
<add key="dotnet9" value="https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet9/nuget/v3/index.json" />
</packageSources>
</configuration>
EOF
export DOTNET_NOLOGO=1
export ROOTFS_DIR=/crossrootfs/arm
sudo mkdir /dev/arm
curl -sSL https://raw.githubusercontent.com/dotnet/arcade/main/eng/common/cross/arm/sources.list.focal -o /dev/arm/sources.list.focal
curl -sSL https://raw.githubusercontent.com/dotnet/arcade/main/eng/common/cross/build-rootfs.sh | sudo -E bash /dev/stdin arm focal llvm10 lldb10

Then running the cross compile build with:

/home/vsts/.dotnet9/dotnet publish /path/to/csharp-project --property:RuntimeIdentifier="linux-arm" --property:TargetFramework="net9.0" --property:PublishAot="true" --property:Configuration="Release" --property:LinkerFlavor="lld" --property:ObjCopy="llvm-objcopy" --property:SysRoot="/crossrootfs/arm" -o /desired/output/directory

As mentioned in this comment, I had to use Ubuntu 20.04 and |
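A quick way to sanity-check that the publish output really targets ARM (a sketch; the `file` utility must be available and the binary name depends on the project):

file /desired/output/directory/<AssemblyName>
# expect something like: ELF 32-bit LSB pie executable, ARM, EABI5 version 1 (SYSV), dynamically linked, ...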
Another question from my side: does anyone know if there will be "official" support for natively cross-compiling to linux-arm in .NET 9, and maybe also in newer versions? The question comes up as we are currently evaluating whether it will be safe to use this functionality in a consumer product. I mean, will .NET 9 be internally tested against NativeAOT support for linux-arm, and will it be maintained and bug-fixed after .NET 9 is released? Unfortunately, I could not find any announcement or anything else about this. |
As it happened, there already are tags for arm32 with the .NET 9 SDK: https://hub.docker.com/_/microsoft-dotnet-nightly-sdk/. 👌😎

# run a throw-away-after-use (--rm) container interactively for linux/arm/v7,
# while mounting the current-working-directory to /myapp
$ docker run --rm --platform linux/arm/v7 -v$(pwd):/myapp -w /myapp -it \
mcr.microsoft.com/dotnet/nightly/sdk:9.0-preview
# inside the container
$ uname -a
Linux ea9e301095ba 6.6.26-linuxkit #1 SMP Sat Apr 27 04:13:19 UTC 2024 armv7l GNU/Linux
root@ea9e301095ba:/# dotnet --info
.NET SDK:
Version: 9.0.100-preview.4.24266.28
Commit: 75a08fda5c
Workload version: 9.0.100-manifests.2c9affbd
MSBuild version: 17.11.0-preview-24225-01+bd0b1e466
Runtime Environment:
OS Name: debian
OS Version: 12
OS Platform: Linux
RID: linux-arm
Base Path: /usr/share/dotnet/sdk/9.0.100-preview.4.24266.28/
.NET workloads installed:
There are no installed workloads to display.
Host:
Version: 9.0.0-preview.4.24260.3
Architecture: arm
Commit: 2270e3185f
.NET SDKs installed:
9.0.100-preview.4.24266.28 [/usr/share/dotnet/sdk]
.NET runtimes installed:
Microsoft.AspNetCore.App 9.0.0-preview.4.24260.3 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 9.0.0-preview.4.24260.3 [/usr/share/dotnet/shared/Microsoft.NETCore.App]
Other architectures found:
None
Environment variables:
Not set
global.json file:
Not found
Learn more:
https://aka.ms/dotnet/info
Download .NET:
https://aka.ms/dotnet/download

On macOS arm64, I run into a qemu assertion #97729 (comment). If you are on an x64 host with docker installed, you can use one of these tags with prerequisites for cross compilation: https://github.com/dotnet/versions/blob/main/build-info/docker/image-info.dotnet-dotnet-buildtools-prereqs-docker-main.json e.g.

$ docker run -e ROOTFS_DIR=/crossrootfs/arm --rm -it mcr.microsoft.com/dotnet-buildtools/prereqs:cbl-mariner-2.0-cross-arm
# install dotnet in it

You can also create a Dockerfile to make it ready.

FROM mcr.microsoft.com/dotnet-buildtools/prereqs:cbl-mariner-2.0-cross-arm
# install dotnet (dotnet-install script)
# now warm up NativeAOT so ilc packages are ready to use
RUN dotnet new webapiaot -n warmupapp && dotnet publish warmupapp && rm -rf warmupapp

Then build this image and tag it. However, if you are on a non-x64 machine like arm64, or you want the bleeding-edge daily build, then you can build the builder image from scratch as discussed above. |
I tried using the
I assume this is caused by the change for year 2038 support. Using a newer docker image fixed this issue. My full Dockerfile:

FROM mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-cross-arm-net9.0
RUN curl -sSL https://dot.net/v1/dotnet-install.sh | bash /dev/stdin --quality preview --channel 9.0 --install-dir "$HOME/.dotnet9"
RUN ln -s /root/.dotnet9/dotnet /usr/bin/dotnet
ENV ROOTFS_DIR=/crossrootfs/arm
# now warm up NativeAOT so ilc packages are ready to use
RUN dotnet new webapiaot -n warmupapp && dotnet publish warmupapp -r linux-arm -p:LinkerFlavor=lld -p:ObjCopy=llvm-objcopy -p:SysRoot=$ROOTFS_DIR && rm -rf warmupapp |
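A sketch of how such a builder image might be used from an x64 host (the arm32-naot-builder tag and the /src mount path are made-up names, and the publish properties mirror the ones used earlier in this thread):

docker build -t arm32-naot-builder .
docker run --rm -v "$(pwd)":/src -w /src arm32-naot-builder \
  dotnet publish -r linux-arm -c Release -p:PublishAot=true \
    -p:LinkerFlavor=lld -p:ObjCopy=llvm-objcopy -p:SysRoot=/crossrootfs/arm -o out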
Yea, use its successor |
This is a tracking issue for the known problems that need to be resolved to get NativeAOT support working on the linux-arm platform.
Known issues:
FEATURE_64BIT_ALIGNMENT may be unhandled (ref: [NativeAOT] Linux/ARM bring-up (4/n) #97269 (comment))
InWriteBarrierHelper returns incorrect answer on Release builds (likely not needed or we have a test gap)
RhpInitialInterfaceDispatch is missing AV value check
RhGetCodeTarget doesn't handle PC-relative movw/movt correctly
Failing runtime tests:
Other things requiring clean up: