[lldb] IRMemoryMap zero address mapping fix #99045
Conversation
Some tests check that an error is reported when an lldb expression dereferences nullptr. On RISCV targets, IRMemoryMap could map the zero address during allocations: since the host process never dereferences nullptr in its normal workflow, a dereference of address zero then appears valid instead of producing an error. This patch adds a check to avoid cases like this. Fixed tests for the RISCV target: TestEnumTypes.EnumTypesTestCase, TestAnonymous.AnonymousTestCase
Thank you for submitting a Pull Request (PR) to the LLVM Project! This PR will be automatically labeled and the relevant teams will be notified. If you wish to, you can add reviewers by using the "Reviewers" section on this page. If this is not working for you, it is probably because you do not have write permissions for the repository; in that case you can tag reviewers by name in a comment instead. If you have received no comments on your PR for a week, you can request a review by "ping"ing the PR in a comment. If you have further questions, they may be answered by the LLVM GitHub User Guide. You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.
@llvm/pr-subscribers-lldb Author: None (dlav-sc) Changes: Some tests check that an error is reported when an lldb expression dereferences nullptr. On RISCV targets, IRMemoryMap could map the zero address during allocations: since the host process never dereferences nullptr in its normal workflow, a dereference of address zero then appears valid instead of producing an error. This patch adds a check to avoid cases like this. Fixed tests for the RISCV target: TestEnumTypes.EnumTypesTestCase, TestAnonymous.AnonymousTestCase. Full diff: https://github.com/llvm/llvm-project/pull/99045.diff 1 Files Affected:
diff --git a/lldb/source/Expression/IRMemoryMap.cpp b/lldb/source/Expression/IRMemoryMap.cpp
index de631370bb048..a15645e2b5b48 100644
--- a/lldb/source/Expression/IRMemoryMap.cpp
+++ b/lldb/source/Expression/IRMemoryMap.cpp
@@ -126,6 +126,10 @@ lldb::addr_t IRMemoryMap::FindSpace(size_t size) {
} else {
ret = region_info.GetRange().GetRangeEnd();
}
+ } else if (ret == 0x0) {
+ // Pick another region if the address is zero. We shouldn't map address
+ // zero, so the user still gets an error when dereferencing nullptr.
+ ret = region_info.GetRange().GetRangeEnd();
} else if (ret + size < region_info.GetRange().GetRangeEnd()) {
return ret;
} else {
Is this something specific to risc-v or simply uncovered by testing against a certain risc-v target? Just wondering why we haven't had to do this before now.
When we're connected to a stub that can allocate memory in the target, none of this code is executed. So lldb-server/debugserver will not hit this. We send an allocate-memory packet in that case. So I'm guessing we have (1) a target that cannot allocate memory, (2) a target that supported qMemoryRegionInfo. Can you show what your qMemoryRegionInfo packets and replies look like?
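For reference, the allocate-memory request is an lldb extension packet; a round trip looks roughly like this (the size and address here are made up):

```
->  _M1000,rwx      request 0x1000 bytes, readable/writable/executable
<-  7ffff7ff0000    address of the new allocation; an empty reply means the
                    stub does not support allocating memory
```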
To make sure I'm clear: I don't have a problem with the basic idea of the change, although we could comment what is going on more clearly, and I'm curious about that qMemoryRegionInfo packet. But it looks like you're connecting to a device which can't allocate memory through the gdb remote serial protocol (common for jtag/vm type stubs), but can report memory region info (much less common, it's an lldb extension I believe), and the memory region at address 0 is reported as being inaccessible (reasonable). So the IRMemoryMap is trying to use address 0 and that overlaps with actual addresses seen in the process, leading to confusion. Most similar environments, that can't allocate memory, don't support qMemoryRegionInfo, so we pick one of our bogus address ranges (in higher memory) to get a better chance of avoiding actual program addresses.
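For reference, that packet is just an address query; a typical exchange looks roughly like this (the values are made up):

```
->  qMemoryRegionInfo:400000
<-  start:400000;size:1000;permissions:rx;
```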
I connect to a riscv (rv64gc) machine, nothing special. Lldb can allocate memory on a remote target only if the target supports executing JIT-compiled code; otherwise IRMemoryMap falls back to host-side allocations. The problem is that riscv targets currently don't support JIT-compiled code execution and can only evaluate simple lldb expressions by interpreting their IR (see the illustrative commands below). If, in a case like that, the expression dereferences nullptr and IRMemoryMap has mapped address zero on the host, the dereference succeeds instead of reporting an error.
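For illustration (these are not the exact expressions from the affected tests), the difference is roughly:

```
(lldb) expr *(int *)0       # plain IR, can be interpreted; should fail with a
                            # memory access error at address 0x0
(lldb) expr (int)getpid()   # needs a real function call in the inferior, i.e.
                            # jitted code, which riscv could not run before #99336
```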
I take this example from the tests mentioned above. Logs you have asked for:

Without patch:
With patch:
Actually, I have solved the problem with this patch: #99336. It adds the ability to make function calls inside lldb expressions, including jitted code execution support for riscv targets. I thought this patch might still be useful for other architectures that can't execute JIT code.
I'd describe this patch as handling the case where a remote stub cannot allocate memory and has a qMemoryRegionInfo packet which does not specify permissions. The patch you reference makes jitted code expressions work, but I think IRMemoryMap::FindSpace only needs to find space in the inferior virtual address space -- allocate memory -- and that was already working, wasn't it? Also, your before & after packets show that you've got permissions returned for your qMemoryRegionInfo responses now, which you didn't before -- you won't hit this failure because of that change. One thing I'd say is that this patch is looking for "is this a region with read, write, or execute permissions", so it can skip that. It does this by comparing each permission against eNo separately, but it could be written more compactly, or more simply: change the while (true) loop to first check if all permissions are No in a region, along the lines of the sketch below.
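Something like this helper would capture that check (a sketch assuming the lldb_private::MemoryRegionInfo API; the helper name is made up, this is not the code from the patch):

```cpp
#include "lldb/Target/MemoryRegionInfo.h"

using lldb_private::MemoryRegionInfo;

// True when the stub explicitly reported the region as having no read, no
// write, and no execute permission, i.e. the inferior cannot touch it and
// IRMemoryMap may shadow it with host-side memory.
static bool AllPermissionsAreNo(const MemoryRegionInfo &region_info) {
  return region_info.GetReadable() == MemoryRegionInfo::eNo &&
         region_info.GetWritable() == MemoryRegionInfo::eNo &&
         region_info.GetExecutable() == MemoryRegionInfo::eNo;
}
```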
I think we should change these checks to look for an explicitly inaccessible memory region (one where read, write, and execute are all reported as No),
and I also do think there is value in adding a special case for address 0. Even if we have an inaddressable memory block at address 0 which should be eligible for shadowing with lldb's host memory, using that is a poor choice because people crash at address 0 all the time and we don't want references to that address finding the IRMemoryMap host side memory values.
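A sketch of that combined condition (not the exact code posted in this review; the locals ret, size, and region_info are the ones already used in FindSpace):

```cpp
// Hand out `ret` only if the region is explicitly inaccessible to the
// inferior, does not start at address 0 (nullptr dereferences must keep
// failing), and is large enough for the requested allocation.
const bool inaccessible =
    region_info.GetReadable() == MemoryRegionInfo::eNo &&
    region_info.GetWritable() == MemoryRegionInfo::eNo &&
    region_info.GetExecutable() == MemoryRegionInfo::eNo;
if (inaccessible && ret != 0 &&
    ret + size < region_info.GetRange().GetRangeEnd())
  return ret;
```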
I haven't tested it (or even tried to compile it, lol) but I think this loop might be expressible as simply
I dropped two behaviors here - one is that it would emit a unique assert if qMemoryRegionInfo worked once, but failed for a different address. The second is that I think the old code would try to combine consecutive memory regions to make one block large enough to satisfy the size requirement (not sure it was doing this correctly).
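A rough reconstruction of that kind of simplified loop (not the exact snippet from this comment, and it likewise drops the assert and the region-combining behavior; size and process_sp are the existing FindSpace locals):

```cpp
// Walk the memory regions reported by the stub, starting above page zero so
// address 0 is never considered, and return the first explicitly
// inaccessible region that is large enough for the allocation.
lldb::addr_t candidate = 0x1000;
lldb_private::MemoryRegionInfo region_info;
lldb_private::Status err =
    process_sp->GetMemoryRegionInfo(candidate, region_info);
while (err.Success()) {
  const bool inaccessible =
      region_info.GetReadable() == lldb_private::MemoryRegionInfo::eNo &&
      region_info.GetWritable() == lldb_private::MemoryRegionInfo::eNo &&
      region_info.GetExecutable() == lldb_private::MemoryRegionInfo::eNo;
  if (inaccessible &&
      candidate + size <= region_info.GetRange().GetRangeEnd())
    return candidate;
  lldb::addr_t next = region_info.GetRange().GetRangeEnd();
  if (next <= candidate || next == LLDB_INVALID_ADDRESS)
    break; // ran out of address space (or made no progress)
  candidate = next;
  err = process_sp->GetMemoryRegionInfo(candidate, region_info);
}
return LLDB_INVALID_ADDRESS;
```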
Ah, but I see my misunderstanding as I look at what debugserver returns. It uses "no permissions" to indicate a memory range that is unmapped --
0x100008000-0x000000010040c000 is a free address range in this example, and what this loop should choose if it had to find an unused memory range. That conflicts with the behavior that this PR started with originally, where no permissions were returned for any memory regions -- which looks like the way debugserver reports a free memory region. (The low 4 GB segment at the start of my output above is the PAGEZERO empty segment of virtual address space on 64-bit darwin processes.)
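In packet form, a free range like that comes back with no permissions key at all, roughly (reconstructed from the range above, not a verbatim log):

```
->  qMemoryRegionInfo:100008000
<-  start:100008000;size:404000;
```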
I put up a PR that avoids a memory region starting at address 0, and also clarifies the documentation of the qMemoryRegionInfo packet to state that permissions: are required for all memory regions that are accessible by the inferior process. A memory region with no permissions: key should be treated as inaccessible to the inferior.
I've merged #100288. I think we should close this PR.
I've checked your patch on my machine and it works, so looks good to me.
Yep, let's close this PR.