FG-KASLR #3

Closed
wants to merge 15 commits

1 change: 1 addition & 0 deletions .gitignore
@@ -26,6 +26,7 @@
*.gz
*.i
*.ko
*.ko.lds
*.lex.c
*.ll
*.lst
6 changes: 6 additions & 0 deletions Documentation/admin-guide/kernel-parameters.txt
@@ -2235,6 +2235,12 @@
kernel and module base offset ASLR (Address Space
Layout Randomization).

nofgkaslr [KNL]
When CONFIG_FG_KASLR is set, this parameter
disables kernel function granular ASLR
(Address Space Layout Randomization).
See Documentation/security/fgkaslr.rst.

kasan_multi_shot
[KNL] Enforce KASAN (Kernel Address Sanitizer) to print
report on every invalid memory access. Without this
172 changes: 172 additions & 0 deletions Documentation/security/fgkaslr.rst
@@ -0,0 +1,172 @@
.. SPDX-License-Identifier: GPL-2.0

=====================================================================
Function Granular Kernel Address Space Layout Randomization (fgkaslr)
=====================================================================

:Date: 6 April 2020
:Author: Kristen Accardi

Kernel Address Space Layout Randomization (KASLR) was merged into the kernel
with the objective of increasing the difficulty of code reuse attacks. Code
reuse attacks reuse existing code snippets to get around existing memory
protections. They exploit software bugs which expose addresses of useful code
snippets to control the flow of execution for their own nefarious purposes.
KASLR as it was originally implemented moves the entire kernel code text as a
unit at boot time in order to make addresses less predictable. The order of the
code within the segment is unchanged - only the base address is shifted. There
are a few shortcomings to this algorithm.

1. Low Entropy - there are only so many locations the kernel can fit in. This
means an attacker could guess without too much trouble.
2. Knowledge of a single address can reveal the offset of the base address,
exposing all other locations for a published/known kernel image.
3. Info leaks abound.

Finer grained ASLR has been proposed as a way to make ASLR more resistant
to info leaks. It is not a new concept at all, and there are many variations
possible. Function reordering is an implementation of finer grained ASLR
which randomizes the layout of an address space on a function level
granularity. The term "fgkaslr" is used in this document to refer to the
technique of function reordering when used with KASLR, as well as finer grained
KASLR in general.

The objective of this patch set is to improve a technology that is already
merged into the kernel (KASLR). This code will not prevent all code reuse
attacks, and should be considered as one of several tools that can be used.

Implementation Details
======================

The over-arching objective of the fgkaslr implementation is incremental
improvement over the existing KASLR algorithm. It is designed to work with
the existing solution, and there are two main areas where code changes occur:
build time and load time.

Build time
----------

GCC has had an option to place functions into individual .text sections
for many years now (-ffunction-sections). This option is used to implement
function reordering at load time. The final compiled vmlinux retains all the
section headers, which can be used to help find the address ranges of each
function. Using this information and an expanded table of relocation addresses,
individual text sections can be shuffled immediately after decompression.
Some data tables inside the kernel that have assumptions about order
require sorting after the update. In order to modify these tables,
a few key symbols are preserved through the objcopy symbol stripping process
for use after shuffling the text segments. Any special input sections which are
defined by the kernel build process and collected into the .text output
segment are left unmodified and will still be present inside the .text segment,
unrandomized other than normal base address randomization.
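
As a rough sketch of what the shuffling step amounts to (this is purely
illustrative and not the in-tree arch/x86/boot/compressed/fgkaslr.c code),
the reordering can be modelled as a Fisher-Yates pass over per-function
section records built from the ELF section headers; the ``text_section``
struct and the ``get_random()`` callback below are hypothetical stand-ins
for the boot-time equivalents::

    /*
     * Illustrative sketch only -- not the in-tree implementation.
     * Each .text.* section found in the ELF section headers is described
     * by a small record; a Fisher-Yates pass reorders the records, after
     * which new load addresses can be laid out in the shuffled order.
     */
    struct text_section {
            unsigned long old_addr;     /* address in the original layout */
            unsigned long new_addr;     /* address after reordering */
            unsigned long size;
    };

    /*
     * 'get_random(limit)' is a hypothetical helper assumed to return a
     * uniform value in [0, limit), standing in for the boot-time RNG.
     */
    void shuffle_sections(struct text_section *secs, unsigned long count,
                          unsigned long (*get_random)(unsigned long limit))
    {
            unsigned long i;

            if (count < 2)
                    return;

            for (i = count - 1; i > 0; i--) {
                    /* pick j uniformly from [0, i] and swap entries i and j */
                    unsigned long j = get_random(i + 1);
                    struct text_section tmp = secs[i];

                    secs[i] = secs[j];
                    secs[j] = tmp;
            }
    }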

Load time
---------

The boot kernel was modified to parse the vmlinux ELF file after
decompression to locate the symbols needed for modifying data tables, and to
look for any .text.* sections to randomize. The sections are then shuffled,
and tables are updated or resorted. The existing code which updated relocation
addresses was modified to account for not just a fixed delta from the load
address, but the offset that the function section was moved to. This requires
inspection of each address to see if it was impacted by a randomization.
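
Reusing the hypothetical ``text_section`` records from the sketch above, a
minimal version of that per-address fixup could look like the following (the
in-tree code presumably uses a faster lookup; a linear scan keeps the sketch
short)::

    /*
     * Illustrative sketch only: rebase one relocation target against the
     * shuffled layout. If the address fell inside a randomized .text.*
     * section, apply that section's individual displacement; otherwise
     * only the normal KASLR base delta (handled elsewhere) applies.
     */
    unsigned long adjust_address(unsigned long addr,
                                 const struct text_section *secs,
                                 unsigned long count)
    {
            unsigned long i;

            for (i = 0; i < count; i++) {
                    unsigned long start = secs[i].old_addr;
                    unsigned long end = start + secs[i].size;

                    if (addr >= start && addr < end)
                            return addr + (secs[i].new_addr - secs[i].old_addr);
            }

            /* not in a randomized section: leave it to base-offset handling */
            return addr;
    }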

In order to hide the new layout, symbols reported through /proc/kallsyms will
be displayed in a random order.

Performance Impact
==================

There are two areas where function reordering can impact performance: boot
time latency, and run time performance.

Boot time latency
-----------------

This implementation of finer grained KASLR impacts the boot time of the kernel
in several places. It requires additional parsing of the kernel ELF file to
obtain the section headers of the sections to be randomized. It calls the
random number generator for each section to be randomized to determine that
section's new memory location. It copies the decompressed kernel into a new
area of memory to avoid corruption when laying out the newly randomized
sections. It increases the number of relocations the kernel has to perform at
boot time vs. standard KASLR, and it also requires a lookup on each address
that needs to be relocated to see if it was in a randomized section and needs
to be adjusted by a new offset. Finally, it re-sorts a few data tables that
are required to be sorted by address.

Booting a test VM on a modern, well-appointed system showed an increase in
latency of approximately 1 second.

Run time
--------

The performance impact at run-time of function reordering varies by workload.
Randomly reordering the functions will cause an increase in cache misses
for some workloads. Some workloads perform significantly worse under FGKASLR,
while others stay the same or even improve. In general, it will depend on the
code flow whether or not finer grained KASLR will impact a workload, and how
the underlying code was designed. Because the layout changes per boot, each
time a system is rebooted the performance of a workload may change.

Image Size
==========

fgkaslr increases the size of the kernel binary due to the extra section
headers that are included, as well as the extra relocations that need to
be added. You can expect fgkaslr to increase the size of the resulting
vmlinux by about 3%, and the compressed image (bzImage) by 15%.

Memory Usage
============

fgkaslr increases the amount of heap that is required at boot time,
although this extra memory is released when the kernel has finished
decompression. As a result, it may not be appropriate to use this feature
on systems without much memory.

Building
========

To enable fine grained KASLR, you need to have the following config options
set (including all the ones you would use to build normal KASLR):

``CONFIG_FG_KASLR=y``

fgkaslr for the kernel is only supported for the X86_64 architecture.

Modules
=======

Modules are randomized similarly to the rest of the kernel by shuffling
the sections at load time prior to moving them into memory. The module must
also have been built with the -ffunction-sections compiler option.

Although fgkaslr for the kernel is only supported for the X86_64 architecture,
it is possible to use fgkaslr with modules on other architectures. To enable
this feature, select the following config option:

``CONFIG_MODULE_FG_KASLR``

This option is selected automatically for X86_64 when CONFIG_FG_KASLR is set.

Disabling
=========

Disabling normal KASLR using the nokaslr command line option also disables
fgkaslr. In addition, it is possible to disable fgkaslr separately by booting
with "nofgkaslr" on the command line.

Further Information
===================

There are a lot of academic papers which explore finer grained ASLR.
This paper in particular contributed significantly to the implementation design.

Selfrando: Securing the Tor Browser against De-anonymization Exploits,
M. Conti, S. Crane, T. Frassetto, et al.

For more information on how function layout impacts performance, see:

Optimizing Function Placement for Large-Scale Data-Center Applications,
G. Ottoni, B. Maher
1 change: 1 addition & 0 deletions Documentation/security/index.rst
@@ -7,6 +7,7 @@ Security Documentation

credentials
IMA-templates
fgkaslr
keys/index
lsm
lsm-development
12 changes: 12 additions & 0 deletions MAINTAINERS
@@ -7925,6 +7925,18 @@ L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/fujitsu-tablet.c

FUNCTION-GRAINED KASLR (FG-KASLR)
M: Alexander Lobakin <alexandr.lobakin@intel.com>
R: Kristen Carlson Accardi <kristen@linux.intel.com>
R: Kees Cook <keescook@chromium.org>
L: linux-hardening@vger.kernel.org
S: Supported
F: Documentation/security/fgkaslr.rst
F: arch/x86/boot/compressed/fgkaslr.c
F: arch/x86/boot/compressed/gen-symbols.h
F: arch/x86/boot/compressed/utils.c
F: scripts/generate_text_sections.pl

FUSE: FILESYSTEM IN USERSPACE
M: Miklos Szeredi <miklos@szeredi.hu>
L: linux-fsdevel@vger.kernel.org
41 changes: 40 additions & 1 deletion Makefile
@@ -871,8 +871,47 @@ ifdef CONFIG_DEBUG_SECTION_MISMATCH
KBUILD_CFLAGS += -fno-inline-functions-called-once
endif

# Prefer linking with `-z unique-symbol` if available; this eliminates
# position-based search. It is also a requirement for FG-KASLR
ifeq ($(CONFIG_LD_HAS_Z_UNIQUE_SYMBOL)$(CONFIG_LIVEPATCH),yy)
KBUILD_LDFLAGS += -z unique-symbol
endif

# Allow ASM code to generate separate sections for each function. See
# `include/linux/linkage.h` for explanation. This flag is to enable GAS to
# insert the name of the previous section instead of `%S` inside .pushsection
ifdef CONFIG_HAVE_ASM_FUNCTION_SECTIONS
ifneq ($(CONFIG_LD_DEAD_CODE_DATA_ELIMINATION)$(CONFIG_LTO_CLANG)$(CONFIG_FG_KASLR),)
SECSUBST_AFLAGS := -Wa,--sectname-subst
KBUILD_AFLAGS_KERNEL += $(SECSUBST_AFLAGS)
KBUILD_CFLAGS_KERNEL += $(SECSUBST_AFLAGS)
export SECSUBST_AFLAGS
endif

# Same for modules. LD DCE doesn't work for them, thus not checking for it
ifneq ($(CONFIG_MODULE_FG_KASLR)$(CONFIG_LTO_CLANG),)
KBUILD_AFLAGS_MODULE += -Wa,--sectname-subst
KBUILD_CFLAGS_MODULE += -Wa,--sectname-subst
endif
endif # CONFIG_HAVE_ASM_FUNCTION_SECTIONS

# ClangLTO implies `-ffunction-sections -fdata-sections`, no need
# to specify them manually and trigger a pointless full rebuild
ifndef CONFIG_LTO_CLANG
ifdef CONFIG_MODULE_FG_KASLR
KBUILD_CFLAGS_MODULE += -ffunction-sections
endif

ifneq ($(CONFIG_LD_DEAD_CODE_DATA_ELIMINATION)$(CONFIG_FG_KASLR),)
KBUILD_CFLAGS_KERNEL += -ffunction-sections
endif

ifdef CONFIG_LD_DEAD_CODE_DATA_ELIMINATION
KBUILD_CFLAGS_KERNEL += -fdata-sections
endif
endif # CONFIG_LTO_CLANG

ifdef CONFIG_LD_DEAD_CODE_DATA_ELIMINATION
KBUILD_CFLAGS_KERNEL += -ffunction-sections -fdata-sections
LDFLAGS_vmlinux += --gc-sections
endif

10 changes: 10 additions & 0 deletions arch/Kconfig
@@ -1322,6 +1322,16 @@ config DYNAMIC_SIGFRAME
config HAVE_ARCH_NODE_DEV_GROUP
bool

config ARCH_SUPPORTS_ASM_FUNCTION_SECTIONS
bool
help
An arch should select this if it can be built and run with its
asm functions placed into separate sections to improve DCE, LTO
and FG-KASLR.

config ARCH_SUPPORTS_FG_KASLR
bool

source "kernel/gcov/Kconfig"

source "scripts/gcc-plugins/Kconfig"
2 changes: 2 additions & 0 deletions arch/x86/Kconfig
@@ -102,8 +102,10 @@ config X86
select ARCH_MIGHT_HAVE_PC_SERIO
select ARCH_STACKWALK
select ARCH_SUPPORTS_ACPI
select ARCH_SUPPORTS_ASM_FUNCTION_SECTIONS
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEBUG_PAGEALLOC
select ARCH_SUPPORTS_FG_KASLR if X86_64 && RANDOMIZE_BASE
select ARCH_SUPPORTS_PAGE_TABLE_CHECK if X86_64
select ARCH_SUPPORTS_NUMA_BALANCING if X86_64
select ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP if NR_CPUS <= 4096
1 change: 1 addition & 0 deletions arch/x86/boot/Makefile
@@ -68,6 +68,7 @@ targets += cpustr.h

KBUILD_CFLAGS := $(REALMODE_CFLAGS) -D_SETUP
KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
KBUILD_AFLAGS += $(SECSUBST_AFLAGS)
KBUILD_CFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=)
KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
GCOV_PROFILE := n
1 change: 1 addition & 0 deletions arch/x86/boot/compressed/.gitignore
@@ -3,5 +3,6 @@ relocs
vmlinux.bin.all
vmlinux.relocs
vmlinux.lds
vmlinux.symbols
mkpiggy
piggy.S
21 changes: 19 additions & 2 deletions arch/x86/boot/compressed/Makefile
@@ -58,6 +58,7 @@ KBUILD_CFLAGS += -include $(srctree)/include/linux/hidden.h
CFLAGS_sev.o += -I$(objtree)/arch/x86/lib/

KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
KBUILD_AFLAGS += $(SECSUBST_AFLAGS)
GCOV_PROFILE := n
UBSAN_SANITIZE :=n

@@ -92,6 +93,7 @@ vmlinux-objs-y := $(obj)/vmlinux.lds $(obj)/kernel_info.o $(obj)/head_$(BITS).o

vmlinux-objs-$(CONFIG_EARLY_PRINTK) += $(obj)/early_serial_console.o
vmlinux-objs-$(CONFIG_RANDOMIZE_BASE) += $(obj)/kaslr.o
vmlinux-objs-$(CONFIG_FG_KASLR) += $(obj)/fgkaslr.o $(obj)/utils.o
ifdef CONFIG_X86_64
vmlinux-objs-y += $(obj)/ident_map_64.o
vmlinux-objs-y += $(obj)/idt_64.o $(obj)/idt_handlers_64.o
@@ -109,14 +111,29 @@ $(obj)/vmlinux: $(vmlinux-objs-y) $(efi-obj-y) FORCE
$(call if_changed,ld)

OBJCOPYFLAGS_vmlinux.bin := -R .comment -S
$(obj)/vmlinux.bin: vmlinux FORCE

targets += vmlinux.symbols

ifdef CONFIG_FG_KASLR
quiet_cmd_vmlinux_symbols = GEN $@
cmd_vmlinux_symbols = $(CPP) $(cpp_flags) -P -D"GEN(s)"=s -o $@ $<

VMLINUX_SYMBOLS = $(obj)/vmlinux.symbols
$(VMLINUX_SYMBOLS): $(src)/gen-symbols.h FORCE
$(call if_changed_dep,vmlinux_symbols)

OBJCOPYFLAGS += --keep-symbols=$(VMLINUX_SYMBOLS)
RELOCS_ARGS += --fg-kaslr
endif # CONFIG_FG_KASLR

$(obj)/vmlinux.bin: vmlinux $(VMLINUX_SYMBOLS) FORCE
$(call if_changed,objcopy)

targets += $(patsubst $(obj)/%,%,$(vmlinux-objs-y)) vmlinux.bin.all vmlinux.relocs

CMD_RELOCS = arch/x86/tools/relocs
quiet_cmd_relocs = RELOCS $@
cmd_relocs = $(CMD_RELOCS) $< > $@;$(CMD_RELOCS) --abs-relocs $<
cmd_relocs = $(CMD_RELOCS) $(RELOCS_ARGS) $< > $@;$(CMD_RELOCS) $(RELOCS_ARGS) --abs-relocs $<
$(obj)/vmlinux.relocs: vmlinux FORCE
$(call if_changed,relocs)
