
Rebase of aarch64 PR #1414

Open
wants to merge 19 commits into base: master

Conversation

mihails-strasuns

Sorry for the delay - here is the retested version of aarch64. Differences from #1302:

  1. Rebased on the latest master
  2. Removed a few platform support commits which are not part of core aarch64 support and which I could not test myself
  3. Uses "Add aarch64 unit tests" (kpatch-unit-test-objs#51) for the unit tests, which adds a few more regression tests

Should be testable with the same kernel support series from https://lore.kernel.org/linux-arm-kernel/20220524001637.1707472-1-madvenka@linux.microsoft.com/ using any applicable base version.

swine and others added 19 commits September 5, 2024 11:32
Do we need more robust architecture protection (Issue dynup#1356)

The elf.h reloc-type constants are not unique across archs:
  #define R_PPC64_REL24           10      /* PC relative 26 bit */
  #define R_X86_64_32             10      /* Direct 32 bit zero extended */
so to avoid any unexpected aliasing, guard all R_arch_type refs
with a check on kelf->arch, or on a global default arch set from
the first ELF encountered.
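
A minimal sketch of that guard, assuming kpatch's kelf->arch enum and
struct rela fields; the exact call sites differ in the real patchset:

#include <elf.h>
#include <stdbool.h>
#include "kpatch-elf.h"	/* assumed: struct kpatch_elf, struct rela, enum architecture */

/* R_* values collide across archs (e.g. 10 is both R_PPC64_REL24 and
 * R_X86_64_32), so never test rela->type without checking kelf->arch. */
static bool is_direct_call_reloc(struct kpatch_elf *kelf, struct rela *rela)
{
	switch (kelf->arch) {
	case PPC64:
		return rela->type == R_PPC64_REL24;
	case X86_64:
		return rela->type == R_X86_64_PC32 || rela->type == R_X86_64_PLT32;
	case AARCH64:
		return rela->type == R_AARCH64_CALL26;
	default:
		return false;
	}
}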
The "has_function_profiling" support field in the symbol struct is used to show
that a function symbol is able to be patched. This is necessary to check that
functions which need to be patched are able to be.

On arm64 this means the presence of 2 NOP instructions at function entry,
which are patched by ftrace to call the ftrace handling code. These 2 NOPs
are inserted by the compiler, and their location is recorded in a section
called "__patchable_function_entries". Check whether a symbol has a
corresponding entry in the "__patchable_function_entries" section and, if so,
mark it as "has_func_profiling".

Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>

---

V1->V2:
 - Make the error message standard across architectures when there is no
   patchable entry
 - Don't store the __patchable_function_entries section in
   kpatch_find_func_profiling_calls(); instead find it each time
…d populate

The function kpatch_create_mcount_sections() allocates the __mcount_loc section
and then populates it with functions which have a patchable entry.

The following patch will add aarch64 support to this function, where the
allocation will have to be done before kelf_patched is torn down.
Thus split this function so that the allocation can be performed earlier,
with the population done as before.

No intended functional change.
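
A sketch of the resulting shape, with illustrative bodies; the exact
signatures and create_section_pair() usage are assumptions based on
kpatch's existing mcount code, not the final patch:

#include "kpatch-elf.h"

/* Phase 1: count the patchable functions and allocate __mcount_loc while
 * kelf_patched is still available. */
static void kpatch_alloc_mcount_sections(struct kpatch_elf *kelf_out)
{
	int nr = 0;
	struct symbol *sym;

	list_for_each_entry(sym, &kelf_out->symbols, list)
		if (sym->type == STT_FUNC && sym->status != SAME &&
		    sym->has_func_profiling)
			nr++;

	create_section_pair(kelf_out, "__mcount_loc", sizeof(void *), nr);
}

/* Phase 2, logic unchanged: fill the allocated section with relocations
 * to each included function's patch site. */
static void kpatch_populate_mcount_sections(struct kpatch_elf *kelf_out);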

Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>

---

V1->V2:
 - Add patch to series
…arch64

The __mcount_loc section contains the addresses of patchable ftrace sites, which
are used by the ftrace infrastructure in the kernel to create a list of traceable
functions and to know where to patch to enable tracing of them. On aarch64 this
section is called __patchable_function_entries and is generated by the
compiler. Either __mcount_loc or __patchable_function_entries is recognised
by the kernel, but for aarch64 use __patchable_function_entries as that is
what is expected.

Add aarch64 support to kpatch_alloc_mcount_sections(). The SHF_LINK_ORDER
section flag must be copied so that it matches, avoiding the following linker error:

ld: __patchable_function_entries has both ordered [...] and unordered [...] sections

Add aarch64 support to kpatch_populate_mcount_sections(). Check for the 2
required NOP instructions on function entry, which may be preceded by a BTI C
instruction depending on whether the function is a leaf function. This
determines the offset of the patch site.
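
An illustrative version of that entry check; the A64 encodings are
standard, but the helper name and error handling are assumptions:

#include <stdint.h>
#include "kpatch-elf.h"

#define AARCH64_INSN_BTI_C	0xd503245f
#define AARCH64_INSN_NOP	0xd503201f

/* Return the patch-site offset within the function: 0 if the two ftrace
 * NOPs start the function, 4 if a BTI C precedes them. */
static int aarch64_ftrace_site_offset(struct symbol *sym)
{
	uint32_t *insn = (uint32_t *)((unsigned char *)sym->sec->data->d_buf +
				      sym->sym.st_value);
	int offset = 0;

	if (insn[0] == AARCH64_INSN_BTI_C) {
		offset = 4;
		insn++;
	}

	if (insn[0] != AARCH64_INSN_NOP || insn[1] != AARCH64_INSN_NOP)
		ERROR("%s: no patchable function entry found", sym->name);

	return offset;
}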

Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>

---

V1->V2:
 - Don't preserve the __patchable_function_entries section from the patched elf
   as this is already verified by kpatch_check_func_profiling_calls()
 - Instead get the patch entry offset by checking for a preceding BTI C instr
 - Copy the section flags for __patchable_function_entries

---

rebased, added sh_link fix from Suraj's later commit
  "kpatch-build: Enable ARM64 support"

Signed-off-by: Pete Swain <swine@google.com>
Add the final support required for aarch64 and enable building on that arch.

Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>

---

V1->V2:
 - Add # shellcheck disable=SC2086
 - Add comment to kpatch_is_mapping_symbol()
On aarch64, only the ASSERT_RTNL macro is affected by source line number
changes (WARN, BUG, etc. no longer embed line numbers in the instruction
stream). A small test function that invokes the macro, for a line change
from 42 to 43:

 0000000000000000 <test_assert_rtnl>:
    0:	d503245f 	bti	c
    4:	d503201f 	nop
    8:	d503201f 	nop
    c:	d503233f 	paciasp
   10:	a9bf7bfd 	stp	x29, x30, [sp, #-16]!
   14:	910003fd 	mov	x29, sp
   18:	94000000 	bl	0 <rtnl_is_locked>
 			18: R_AARCH64_CALL26	rtnl_is_locked
   1c:	34000080 	cbz	w0, 2c <test_assert_rtnl+0x2c>
   20:	a8c17bfd 	ldp	x29, x30, [sp], dynup#16
   24:	d50323bf 	autiasp
   28:	d65f03c0 	ret
   2c:	90000000 	adrp	x0, 0 <test_assert_rtnl>
 			2c: R_AARCH64_ADR_PREL_PG_HI21	.data.once
   30:	39400001 	ldrb	w1, [x0]
 			30: R_AARCH64_LDST8_ABS_LO12_NC	.data.once
   34:	35ffff61 	cbnz	w1, 20 <test_assert_rtnl+0x20>
   38:	52800022 	mov	w2, #0x1                   	// dynup#1
   3c:	90000001 	adrp	x1, 0 <test_assert_rtnl>
 			3c: R_AARCH64_ADR_PREL_PG_HI21	.rodata.str1.8+0x8
   40:	39000002 	strb	w2, [x0]
 			40: R_AARCH64_LDST8_ABS_LO12_NC	.data.once
   44:	91000021 	add	x1, x1, #0x0
 			44: R_AARCH64_ADD_ABS_LO12_NC	.rodata.str1.8+0x8
-  48:	52800542 	mov	w2, #0x2a                  	// dynup#42
+  48:	52800562 	mov	w2, #0x2b                  	// dynup#43
   4c:	90000000 	adrp	x0, 0 <test_assert_rtnl>
 			4c: R_AARCH64_ADR_PREL_PG_HI21	.rodata.str1.8+0x20
   50:	91000000 	add	x0, x0, #0x0
 			50: R_AARCH64_ADD_ABS_LO12_NC	.rodata.str1.8+0x20
   54:	94000000 	bl	0 <__warn_printk>
 			54: R_AARCH64_CALL26	__warn_printk
   58:	d4210000 	brk	#0x800
   5c:	17fffff1 	b	20 <test_assert_rtnl+0x20>

Create an implementation of kpatch_line_macro_change_only() for aarch64,
modeled after the other architectures.  Only look for relocations to
__warn_printk, which ASSERT_RTNL invokes.
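
A minimal sketch of that relocation filter; the full function also
compares the differing instruction bytes, as the other architectures do:

#include <elf.h>
#include <string.h>
#include <stdbool.h>
#include "kpatch-elf.h"

/* On aarch64 only ASSERT_RTNL still embeds __LINE__, via a __warn_printk
 * call, so that is the only referenced symbol treated as a line macro. */
static bool rela_is_line_macro_call(struct rela *rela)
{
	return rela->type == R_AARCH64_CALL26 &&
	       rela->sym && rela->sym->name &&
	       !strcmp(rela->sym->name, "__warn_printk");
}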

Based-on-s390x-code-by: C. Erastus Toe <ctoe@redhat.com>
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Update the kpatch-unit-test-objs submodule reference to add aarch64 unit
tests.

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
It seems mapping symbols in aarch64 ELF objects have a section size of 0,
so exclude them in the section symbol replacing code, just like
kpatch_correlate_symbols() does.
This fixes the data-read-mostly unit test on aarch64.
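
A sketch of the exclusion, under the assumption that aarch64 mapping
symbols ($x/$d) are local, typeless and zero-sized:

#include <stdbool.h>
#include "kpatch-elf.h"

/* aarch64 mapping symbols ($x, $d) mark instruction/data regions; they are
 * zero-sized and must not take part in section symbol replacement. */
static bool kpatch_is_mapping_symbol(struct kpatch_elf *kelf, struct symbol *sym)
{
	if (kelf->arch != AARCH64)
		return false;

	return sym->name && sym->name[0] == '$' &&
	       sym->type == STT_NOTYPE &&
	       sym->bind == STB_LOCAL &&
	       sym->sym.st_size == 0;
}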

Signed-off-by: Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
Copy from kernel source tree.

Signed-off-by: Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
A new clang toolchain on arm64 produces individual __patchable_function_entries
sections for each patchable func in -ffunction-sections mode,
rather than the traditional single __mcount_loc section.

Bend the existing logic to detect this multiplicity in the incoming
kelf objects, and allocate N identical one-entry sections.
These are retrieved as needed by a new function, find_nth_section_by_name(),
and attached to the .text sections they describe.

These __pfe sections are not actually arm64-specific, but a generic
enhancement across gcc & clang to allow better garbage collection of
unreferenced object sections, and of the mcount/pfe objects which refer to them.
The __pfe sections are combined in the kernel-or-module final link,
from 5.19.9's 9440155ccb948f8e3ce5308907a2e7378799be60.
From clang-11, __pfe is supported for x86, though not yet used by the kernel.

The split between allocate/populate phases here is necessary to
enumerate/populate the outgoing section-headers before beginning to
produce output sections.

Also adds some missing \n to log_debug() calls.
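
A sketch of the new helper (zero-based n; the signature is assumed from
the commit text and kpatch's section list type):

#include <string.h>
#include "kpatch-elf.h"

/* Return the n'th (0-based) section with the given name, for objects where
 * clang emits one __patchable_function_entries section per function. */
struct section *find_nth_section_by_name(struct list_head *list, int n,
					 const char *name)
{
	struct section *sec;

	list_for_each_entry(sec, list, list)
		if (!strcmp(sec->name, name) && n-- == 0)
			return sec;

	return NULL;
}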

Signed-off-by: Pete Swain <swine@google.com>
On arm64, kpatch_find_func_profiling_calls() was skipping
leaf functions, which have no relocations, so they weren't patchable.

The other archs need to walk a function's reloc entries to
check for __fentry__ or __mcount, so it's valid for them to skip over
functions without sym->sec->rela: such functions cannot be patchable,
else they would have at least an __fentry__ call relocation.

But arm64 marks functions patchable in a different way, with per-func
__patchable_function_entries sections referring _to_ the func,
not relocations _within_ the func, so a function w/o relocations
for text or data can still be patchable.

Move the sym->sec->rela check to the per-arch paths.

This allows gcc-static-local-var-5.patch
to generate a livepatch on arm64 & x86.
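
A sketch of that restructuring, with the rela guard pushed into the
per-arch branches; both helpers below are hypothetical placeholders for
the per-arch logic:

#include "kpatch-elf.h"

static void kpatch_find_func_profiling_calls(struct kpatch_elf *kelf)
{
	struct symbol *sym;

	list_for_each_entry(sym, &kelf->symbols, list) {
		if (sym->type != STT_FUNC || !sym->sec)
			continue;

		switch (kelf->arch) {
		case AARCH64:
			/* patchability comes from a __patchable_function_entries
			 * reference to the function, so no sym->sec->rela needed */
			aarch64_check_pfe_entry(kelf, sym);	/* hypothetical */
			break;
		default:
			/* other arches look for an __fentry__/_mcount rela, so
			 * functions without relocations can safely be skipped */
			if (!sym->sec->rela)
				break;
			check_fentry_or_mcount_relas(kelf, sym);	/* hypothetical */
			break;
		}
	}
}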

Suggested-By: Bill Wendling <morbo@google.com>
Signed-off-by: Pete Swain <swine@google.com>
Signed-off-by: Pete Swain <swine@google.com>
New toolchain/arch, new conventions for section/label/etc names

gcc's .LCx symbols point to string literals in '.rodata.<func>.str1.*'
sections. Clang creates similar .Ltmp%d symbols in '.rodata.str'.

The function is_string_literal_section() is
generalized (too much?) to match either
- clang's/arm64's /^\.rodata\.str$/
- gcc's /^\.rodata\./ && /\.str1\./

Various matchers for .data.unlikely and .bss.unlikely are
replaced by is_data_unlikely_section(),
generalized to match
- gcc's ".data.unlikely"
- clang's ".(data|bss).module_name.unlikely"

.data.once handled similarly
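
A sketch of the two generalized matchers, with the regexes above
translated into plain string checks; the real functions may take a
struct section rather than a name:

#include <stdbool.h>
#include <string.h>

/* gcc: ".rodata.<func>.str1.<n>"; clang/arm64: exactly ".rodata.str" */
static bool is_string_literal_section_name(const char *name)
{
	return !strncmp(name, ".rodata.", 8) &&
	       (strstr(name, ".str1.") || !strcmp(name, ".rodata.str"));
}

/* gcc: ".data.unlikely"; clang: ".data.<module>.unlikely" or
 * ".bss.<module>.unlikely" */
static bool is_data_unlikely_section_name(const char *name)
{
	size_t len = strlen(name);
	const char *suffix = ".unlikely";
	size_t slen = strlen(suffix);

	if (!strcmp(name, ".data.unlikely"))
		return true;

	return (!strncmp(name, ".data.", 6) || !strncmp(name, ".bss.", 5)) &&
	       len > slen && !strcmp(name + len - slen, suffix);
}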

Signed-off-by: Pete Swain <swine@google.com>
Generalized kpatch_line_macro_change_only() & insn_is_load_immediate()
to collapse the aarch64 support back into the parent.

I'm assuming the 3rd start1 of the original
	/* Verify mov w2 <line number> */
	if (((start1[offset] & 0b11111) != 0x2) || (start1[offset+3] != 0x52) ||
	    ((start1[offset] & 0b11111) != 0x2) || (start2[offset+3] != 0x52))
was a typo for start2.

That's now absorbed into insn_is_load_immediate(), leaving just one
aarch64-specific piece: thinning out the match-list for diagnosing a
__LINE__ reference to just "__warn_printk".
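
A sketch of the aarch64 arm of that helper; only the MOVZ opcode byte is
significant here, and the other arches and the exact signature are omitted:

#include <stdbool.h>

/* The "mov wN, #imm" emitted for __LINE__ is a 32-bit MOVZ, whose top byte
 * is 0x52 in little-endian object code; the immediate differs between the
 * original and patched functions, so only the opcode byte is compared. */
static bool insn_is_load_immediate_aarch64(const unsigned char *insn)
{
	return insn[3] == 0x52;
}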
If CONFIG_UBSAN is enabled, ubsan sections (.data..Lubsan_{data,type})
can be created. Keep them unconditionally.

NOTE: This patch needs to be verified.

Signed-off-by: Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
Initialize add_off earlier so it's obviously never used uninitialized.
Clang was warning on this even if gcc was not.
No functional change: the only path which left it undefined would call
ERROR() anyway.
On ARM64, every function section should have its own pfe section.

GCC 11/12 have a bug where only a single pfe section is generated
for all functions. The bug has been fixed in GCC 13.1.

As create-diff-object generates the pfe sections on its own,
it should also fix this up, instead of trying to repeat the bug.

--
Adjusted whitespace in Zimao's proposed code.

Signed-off-by: Pete Swain <swine@google.com>
Signed-off-by: Mihails Strasuns <mstrasun@amazon.com>
@mihails-strasuns
Author

(dynup/kpatch-unit-test-objs#51 should be merged first)

@mihails-strasuns
Author

AFAIU there are no publicly available arm64 GitHub runners yet; that remains my main concern at the moment, as these unit tests are not actually being run. Would you consider adding a self-hosted one provided by AWS?

@jpoimboe
Member

jpoimboe commented Sep 5, 2024

AFAIU there are no publicly available arm64 GitHub runners yet; that remains my main concern at the moment, as these unit tests are not actually being run. Would you consider adding a self-hosted one provided by AWS?

Yes that would be fine.

@mihails-strasuns
Author

Great! Have sent an e-mail with more details to your RH address.

@jpoimboe
Member

As we discussed offline, the arm64 unit tests run fine on x86, so there's currently no need for self-hosted runners.

@mihails-strasuns
Author

Are there any other concerns?

@jpoimboe
Member

I/we have been traveling last week and this week for conferences, but I will try to review after that.

@mihails-strasuns
Author

Ping :)

@joe-lawrence
Contributor

@mihails-strasuns : sorry for the delay, I was at a conference and then on PTO for the last few weeks. I'll try to take a look this week.

@joe-lawrence
Contributor

@mihails-strasuns : I am considering an intermediate step for the aarch64 support here: #1415

PowerPC is already supported by kpatch-build, and some newer kernel toolset/configurations create patchable_function_entry sections. My thought was to extract the final* implementation details for those sections from @swine's patchset and modify/generalize them for ppc64le. *This would collapse some of the commits in the aarch64 patchset that doubled back after learning new information (like the multi_pfe flag and the generic kpatch_line_macro_change_only()).

My hope is that merging patchable_function_entry support first renders the aarch64 patchset easier to maintain/review.

I haven't tried laying the aarch64 parts back on top of #1415 yet, but I was wondering:

  - which commits or code related to: (2) "Removed few platform support commits which are not part of core aarch64 support and I couldn't test myself"
  - for kernel support, do you know if there is a recent (or rebased) version of Madhaven's kernel patch

Thanks.

@mihails-strasuns
Author

Thanks for having a look! Your plan makes sense to me.

which commits or code related to: (2) "Removed few platform support commits which are not part of core aarch64 support and I couldn't test myself"

Off the top of my head I remember 776056a; I will diff the exact commit list later. This PR matches the Amazon Linux kpatch package patch set almost exactly, so it was something I was very confident about actually working :)

for kernel support, do you know if there is a recent (or rebased) version of Madhaven's kernel patch

I know that @puranjaymohan has been working on rebasing to a newer version but had to switch at some point. He is probably the best contact about it from AL side.

@jpoimboe
Member

for kernel support, do you know if there is a recent (or rebased) version of Madhaven's kernel patch

I know that @puranjaymohan has been working on rebasing to a newer version but had to switch at some point. He is probably the best contact about it from AL side.

FYI, Mark Rutland posted the following preparatory series today:

https://lore.kernel.org/20241010101510.1487477-1-mark.rutland@arm.com

@joe-lawrence
Contributor

Still only a WIP, but here is what the aarch64 implementation PR would look like on top of a separate patchable_function_entries PR:

joe-lawrence/kpatch@patchable_function_entries...joe-lawrence:kpatch:joes-arm64-rebase

(It passes the old aarch64 unit test objects that I created a few years ago :) but I haven't tried anything with a live kernel just yet.)

Still TODO is to untangle the authorship across a few rebases and then pull out the clang fixes separately. Those could also go in ASAP to minimize the baggage that this patchset has been carrying around.
