Make executable memory non writable in the core #1459
Conversation
Wiped the previous proposal and built a new one on the basis of #1468.
Needs a rebase on top of #1468. Waiting for it to be merged before proceeding.
Rebased (#1468 merged).
core/arch/arm/mm/core_mmu_lpae.c
Outdated
@@ -356,7 +357,7 @@ static struct tee_mmap_region *init_xlation_table(struct tee_mmap_region *mm,
assert(level <= 3);

- debug_print("New xlat table (level %u):", level);
+ debug_print("New xlat table (level %u) [0x%8x]:", level, cur);
What is "cur"?
Oops, a bug. That's a copy/paste from the non-LPAE code. It seems I did not test LPAE with the MMU debug traces enabled.
I will fix it.
core/arch/arm/mm/core_mmu_lpae.c
Outdated

/* Clear table before use */
if (*table) {
Assigning desc with *table here would make the following code a little bit easier, and the call to virt_to_phys() below could be performed only when a new table is allocated.
OK, I will find a clean way.
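A minimal sketch of the suggestion above, assuming the init_xlation_table() context of core_mmu_lpae.c; new_xlat_table(), TABLE_DESC and XLAT_TABLE_SIZE are hypothetical placeholders here, not necessarily the names used in the tree:

    /* Reuse the descriptor already stored in *table; only call
     * virt_to_phys() when a brand new table has to be allocated. */
    uint64_t desc = *table;
    uint64_t *xlat;

    if (desc) {
        /* Entry already points to a lower-level table: reuse it */
        xlat = phys_to_virt(desc & ~DESC_ATTRS(desc), MEM_AREA_TEE_RAM);
    } else {
        /* Allocate a fresh table; only now is its physical address needed */
        xlat = new_xlat_table();                 /* hypothetical allocator */
        memset(xlat, 0, XLAT_TABLE_SIZE);        /* clear table before use */
        desc = virt_to_phys(xlat) | TABLE_DESC;  /* hypothetical descriptor bits */
        *table = desc;
    }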
core/arch/arm/mm/core_mmu_lpae.c
Outdated
xlat = phys_to_virt(*table &
~DESC_ATTRS(*table),
MEM_AREA_TEE_RAM);
} else {
DMSG("xlat used: %d/%d", next_xlat, MAX_XLAT_TABLES);
Indentation
core/arch/arm/mm/core_mmu_v7.c
Outdated
pg_idx++;
}
desc |= ((mm->pa & ~SECTION_MASK) +
pg_idx * SMALL_PAGE_SIZE) | attr;
Don't need to "or" in attr as it's already assigned to desc above.
My fault, I forgot to clean it up since your previous comment.
I will fix that.
core/arch/arm/include/arm64.h
Outdated
@@ -36,6 +36,8 @@
#define SCTLR_C BIT32(2)
#define SCTLR_SA BIT32(3)
#define SCTLR_I BIT32(12)
#define SCTLR_WXN BIT32(19)
#define SCTLR_UWXN BIT32(20)
I can't find this bit defined in the AArch64 state anywhere, where did you find it?
True! It exists only in AArch32! Thanks. Maybe to overcome issues when LPAE is not enabled?
I will drop SCTLR_UWXN and update generic_entry_a64.S.
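For reference, WXN does exist in AArch64 as SCTLR_EL1 bit 19; only UWXN is AArch32-specific. A minimal illustrative sketch of enabling it from C, assuming the core runs at (secure) EL1 — in OP-TEE the real change belongs in generic_entry_a64.S as noted above:

    #define SCTLR_WXN BIT32(19)  /* write permission implies execute-never */

    static void enable_wxn_el1(void)
    {
        uint64_t sctlr;

        /* Read-modify-write SCTLR_EL1 to set the WXN bit */
        asm volatile("mrs %0, sctlr_el1" : "=r" (sctlr));
        sctlr |= SCTLR_WXN;
        asm volatile("msr sctlr_el1, %0" : : "r" (sctlr));
        asm volatile("isb");
    }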
core/arch/arm/plat-vexpress/conf.mk
Outdated
@@ -32,8 +37,6 @@ endif

$(call force,CFG_GENERIC_BOOT,y)
$(call force,CFG_GIC,y)
$(call force,CFG_HWSUPP_MEM_PERM_PXN,y)
Shouldn't these be set for all the vexpress platforms?
Yes. Juno is an A53 and an A57? Why does it use cortex-a15 for the aarch32 implementation?
Shouldn't we include the cortex-a53.mk file instead? Actually, I think I should merge cortex-a53.mk and cortex-a57.mk into a cortex-armv8-0.mk and follow ARMv8 minor versions to distinguish the cores.
Another solution is to simply add here:
$(call force,CFG_HWSUPP_MEM_PERM_PXN,y)
$(call force,CFG_HWSUPP_MEM_PERM_WXN,y)
cortex-a15 is used because gcc didn't like cortex-a53 when the initial port was made; perhaps that has changed now. Juno is actually both A53 and A57.
core/arch/arm/kernel/kern.ld.S
Outdated
OUTPUT_FORMAT(CFG_KERN_LINKER_FORMAT)
OUTPUT_ARCH(CFG_KERN_LINKER_ARCH)

ENTRY(_start)
SECTIONS
{
. = CFG_TEE_LOAD_ADDR;
. = ALIGN(SMALL_PAGE_SIZE);
__core_unpg_rx_start = .;
The "core" part isn't needed and also inconsistent with all the other symbols defined here.
Isn't __core_unpg_rx_start
supposed to be exactly the same thing as __text_start
or am I missing something?
You're right, __core_unpg_rx_start aims at being __text_start.
I would tend to keep the definition of each __core_(unpg|init)_r(x|o|w)_(start|end|size) and add a PROVIDE(__core_unpg_rx_start = __text_start), but maybe you would prefer not to introduce the useless _core_XXXX_rX_XXX and use the already defined symbols. I felt my way helped more in understanding the mapping when reading 'kernel.ld.S'.
I'm basically OK with having redundant definitions as they help understanding, but definitions that are supposed to be the same should be assigned at the same place. I think the new locations are better, so please move __text_start and friends.
@@ -50,13 +50,19 @@

#include <platform_config.h>

#ifndef SMALL_PAGE_SIZE
#define SMALL_PAGE_SIZE 4096
Since this is introduced, please update all the other cases here where we align on a small page boundary.
OK, it seems I was a bit lazy this time.
core/arch/arm/kernel/kern.ld.S
Outdated
@@ -250,6 +268,10 @@ SECTIONS
__text_init_end = .;
}

. = ALIGN(SMALL_PAGE_SIZE);
Is it that harmful to mix ro-data with code in a page?
If an attacker is able to select some address to jump to, the code pages have so much more potential to contain useful code anyway.
True. An alternate implementation of this P-R simply splits read-only and read/write memory. It can be easily done and could save some kBytes (dear pager!). But when the pager is disabled, we don't expect to struggle over a few kBytes.
I would not like a new configuration switch such as CFG_CORE_MAPS_READONLY_EXECUTABLE.
It's about reducing the attack surface. Should we cowardly restrict this, or nicely waste a little memory on maybe-useless extra hardening?
I think it's silly, but I seem to be the only one that cares so never mind.
I understand your feeling about wasting memory and I fully agree when the pager is used. I will go for a CFG_CORE_RODATA_NONEXECUTABLE switch and switch it off by default.
core/arch/arm/kernel/kern.ld.S
Outdated
. = ALIGN(8);

/*
* Insure init sections ends on mapping page
Parts of the code in the init section remain in use even after OP-TEE is fully initialized, so it will never be paged out for good.
This aligning wastes memory, with the only win being that a few lines of code can be removed from init_runtime().
This alignment is required when separating executable memory from read-only memory.
We can only drop it if we discard the read-only/non-executable requirement.
Again, I understand your concern about physical memory waste in the paged configuration.
I will propose a CFG_CORE_RODATA_NONEXECUTABLE=y|n switch.
One way to overcome this specific alignment could be to swap the paged executable text and the paged read-only data. Would you agree with this change?
Yes, sounds good.
This P-R builds on a buggy reuse of xlat tables.
I will submit a fix in a specific P-R.
core/arch/arm/mm/core_mmu_lpae.c
Outdated

/* Recurse to fill in new table */
- mm = init_xlation_table(mm, base_va, new_table,
- level + 1);
+ mm = init_xlation_table(mm, base_va, xlat, level + 1);
This is crappy. If an mm reference leaves room for the next mm in the xlat, the increment table++ below breaks the later reuse of the same xlat table.
core/arch/arm/mm/core_mmu_v7.c
Outdated
@@ -712,7 +710,7 @@ static void map_memarea(struct tee_mmap_region *mm, uint32_t *ttb)
mm->attr & TEE_MATTR_SECURE ? "S" : "NS");

attr = mattr_to_desc(1, mm->attr | TEE_MATTR_TABLE);
- pa = map_page_memarea(mm);
+ pa = map_page_memarea(mm, ttb[mm->va >> SECTION_SHIFT]);
This fails if the mm reference covers several ttb entries.
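A hedged sketch of one way to handle that case, walking every level-1 (section) entry the region spans; map_one_section() is a hypothetical helper, not an existing OP-TEE function:

    /* Walk each 1 MiB section covered by the region instead of using
     * a single ttb[mm->va >> SECTION_SHIFT] entry. */
    size_t idx = mm->va >> SECTION_SHIFT;
    size_t last = (mm->va + mm->size - 1) >> SECTION_SHIFT;

    for (; idx <= last; idx++)
        map_one_section(mm, &ttb[idx]);  /* hypothetical per-section helper */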
This 2nd commit "core: reuse xlat table when possible" could be simplified.
Rebased since there were no further comments. Previous comments addressed and squashed.
core/arch/arm/mm/core_mmu.c
Outdated
@@ -571,11 +571,19 @@ static void init_mem_map(struct tee_mmap_region *memory_map, size_t num_elems)
assert(end <= CFG_TEE_RAM_START + CFG_TEE_RAM_VA_SIZE);

if (core_mmu_place_tee_ram_at_top(va)) {
bool __maybe_unused va_is_secure = true; /* any init fits */
Move outside of the block since it's also used in the else part.
The two are unrelated (apart from the functional purpose). Yet, OK to move the definition to the function header.
core/arch/arm/mm/core_mmu_v7.c
Outdated
const char *str __maybe_unused)
{
if (!(mm->attr & TEE_MATTR_VALID_BLOCK))
debug_print("%s [%08" PRIxVA " %08" PRIxVA "] not-mapped",
s/not-mapped/not mapped/
ok
mm->attr & (TEE_MATTR_CACHE_CACHED <<
TEE_MATTR_CACHE_SHIFT) ? "MEM" : "DEV",
mm->attr & TEE_MATTR_PW ? "RW" : "RO",
mm->attr & TEE_MATTR_PX ? "X" : "XN",
s/XN/NX/ maybe?
"XN" is the ARM convention for eXecute-Never. I would prefer to use it instead of a "NX".
Maybe I should rename CFG_CORE_RxDATA_NOEXEC into CFG_CORE_RxDATA_XN?
XN is ok. And I'm fine with _NOEXEC in config settings, it's easier to read I think.
OK, I'll keep xxx_NOEXEC then.
core/arch/arm/mm/core_mmu.c
Outdated

return NULL;
found:
return map_pa2va(mmap, pa);
Would be more compact as:
mmap = find_map_by_type_and_pa(MEM_AREA_TEE_RAM, pa);
if (!mmap)
mmap = find_map_by_type_and_pa(MEM_AREA_TEE_RAM_RW, pa);
if (!mmap)
mmap = find_map_by_type_and_pa(MEM_AREA_TEE_RAM_RO, pa);
if (!mmap)
mmap = find_map_by_type_and_pa(MEM_AREA_TEE_RAM_RX, pa);
return map_pa2va(mmap, pa);
ok
core/arch/arm/include/mm/core_mmu.h
Outdated
@@ -145,6 +145,8 @@ static inline const char *teecore_memtype_name(enum teecore_memtypes type)
return names[type];
}

#define MEM_AREA_TEE_RAM_DATA MEM_AREA_TEE_RAM
Wouldn't MEM_AREA_TEE_RAM_RW_DATA be a better name? (I mean, the TEE core has a RO data section, too.)
core/arch/arm/kernel/kern.ld.S
Outdated

.rodata : ALIGN(8) {
__rodata_start = .;
I have no strong opinion on this, and it may be a matter of personal taste, but I'd rather keep an align statement here for clarity (although it's not strictly needed since there is one in the previous block). In fact, I like the previous syntax better (it's clear and compact).
Same below of course.
OK, I will restore the old way.
core/arch/arm/kernel/kern.ld.S
Outdated
* __vcore_/unpg_|ro_/start ordering: | __flatmap_unpg_ro_start / +size
* \init \rw \size | __flatmap_unpg_rw_start / +size
* v __flatmap_init_rx_start / +size
* __flatmap_init_ro_start / +size
It took me a few seconds to parse that ;) in fact I don't think this comment has much value, it just repeats what can be seen in the file. Same for sunxi.
ok
core/arch/arm/kernel/kern.ld.S
Outdated
PROVIDE(__vcore_init_rx_start = 0);
PROVIDE(__vcore_init_ro_start = 0);
PROVIDE(__vcore_init_rx_size = 0);
PROVIDE(__vcore_init_ro_size = 0);
I will remove these definitions as __vcore_init_xxx are not used when CFG_WITH_PAGER is disabled.
core/arch/arm/mm/core_mmu_v7.c
Outdated
l2 = core_mmu_alloc_l2(mm->size);
else
l2 = phys_to_virt(xlat & SECTION_PT_ATTR_MASK,
MEM_AREA_TEE_RAM_DATA);
Need to check that core_mmu_alloc_l2() or phys_to_virt() returned a valid pointer.
core/arch/arm/kernel/kern.ld.S
Outdated
_end_of_ram = .;

#ifndef CFG_WITH_PAGER
. = ALIGN(SMALL_PAGE_SIZE);
Why this?
True, this is useless. I guess it is even useless to move this before _end_of_ram = .;
I will remove it.
core/arch/arm/kernel/kern.ld.S
Outdated

#ifdef CFG_WITH_PAGER
/*
* Core init mapping shall mapped up to end of physical RAM.
s/shall mapped/shall cover/
thanks
l2 = core_mmu_alloc_l2(mm->size);
else
l2 = phys_to_virt(xlat & SECTION_PT_ATTR_MASK,
MEM_AREA_TEE_RAM_RW_DATA);
Need to check that l2 is a valid pointer.
You mean something like an assert(l2);?
By construction, core_mmu_alloc_l2() returns a valid address or panics.
By construction, if xlat is not null, it already stores a valid L2 table address (+ attributes/desc. type). map_page_memarea_in_pgdirs() will assert that the attributes are valid. Maybe, in this case, we could assert here that the attributes do match?
An assert() would be nice, together with a short explanation of why l2 can't be NULL.
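A small sketch of how that could look, based on the branch shown in the diff above; the '!xlat' emptiness test is only an assumption for illustration:

    if (!xlat)  /* assumed: no L2 table assigned to this entry yet */
        l2 = core_mmu_alloc_l2(mm->size);
    else
        l2 = phys_to_virt(xlat & SECTION_PT_ATTR_MASK,
                          MEM_AREA_TEE_RAM_RW_DATA);

    /*
     * l2 cannot be NULL here by construction: core_mmu_alloc_l2()
     * returns a valid address or panics, and a non-zero xlat entry
     * already holds the address of a valid, mapped L2 table.
     */
    assert(l2);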
(rebased, squash on-going...)
Oops! I have squashed 2 commits into a single one. The commit comment for "core: label the bounds of core flat mapped memory" does not reflect the changes made. I will fix and push back.
Rebased, squash issue fixed, tags applied.
Rebased.
Rebased.
FYI, works on HiKey (64 bits, 32 bits, 32 bits with CFG_WITH_PAGER=y, 64 bits with CFG_CORE_RODATA_NOEXEC=y, 64 bits with CFG_CORE_RODATA_NOEXEC=n CFG_CORE_RWDATA_NOEXEC=n).
Thanks. I'll add my "tested-by": on qemu_virt and qemu_armv8, with several configs; pager (when applicable), rw-xn/ro-xn, debug, lpae/no-lpae (when applicable).
Tested-by tags applied.
...rebased...
Dear all,
(edited: fix)
I added a later patch to fix a weakness in the patch series. EDITED: the extra patch is not required => it is being reviewed (& hopefully merged) from #1547. The current PR only needs a rebase without any extra change (applied tags are still valid) once #1547 is merged.
Before this patch, the core static mapping did not ensure that level2 tables do not mix secure and non-secure virtual memory mappings.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
When a level2 translation table is already used for a virtual mapping range, allow the core to reuse it to extend the mapping in the same virtual region. map_memarea() now maps a single given "memory map area" based on the MMU root table knowledge. Each non-LPAE level2 table is reset to zero when allocated, since map_page_memarea() now only fills the mapped virtual range in the target MMU table.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Define new memory type IDs for the core private memory:
- MEM_AREA_TEE_RAM_RX defines read-only/executable memory.
- MEM_AREA_TEE_RAM_RO defines read-only/non-executable memory.
- MEM_AREA_TEE_RAM_RW defines read/write/non-executable memory.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
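To illustrate the intent of these new types (a hedged sketch, not the actual OP-TEE code), they could translate into mapping attributes roughly as follows, assuming a TEE_MATTR_PR read flag alongside the TEE_MATTR_PW/PX flags discussed earlier:

    static uint32_t area_to_attr(enum teecore_memtypes t)
    {
        switch (t) {
        case MEM_AREA_TEE_RAM_RX:  /* code: read-only, executable */
            return TEE_MATTR_PR | TEE_MATTR_PX | TEE_MATTR_SECURE;
        case MEM_AREA_TEE_RAM_RO:  /* rodata: read-only, execute-never */
            return TEE_MATTR_PR | TEE_MATTR_SECURE;
        case MEM_AREA_TEE_RAM_RW:  /* data/heap/stack: read/write, execute-never */
            return TEE_MATTR_PR | TEE_MATTR_PW | TEE_MATTR_SECURE;
        default:
            return 0;
        }
    }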
This change prepares the split of executable memory from read/write memory. Depending on the configuration, the core private memory "TEE RAM" (heap, stack, ...) will either be in TEE_RAM or TEE_RAM_RW. MEM_AREA_TEE_RAM_RW_DATA abstracts the memory used for core data.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Use __init_start and __init_end to refer to the "init" bounds. Remove the unused __text_init_start/_end, __rodata_init_start/_end, ... Define SMALL_PAGE_SIZE locally in the linker file kernel.ld.S.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Define labels to identify the core memory layout. These labels will later be used to define the mapping bounds between executable and writable memories. Update plat-sunxi/kernel.ld.S to match the support of generic_boot.c.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Make executable memory non-writable and writable memory non-executable. Effective upon CFG_CORE_RWDATA_NOEXEC=y; the default configuration enables this directive.

If CFG_CORE_RWDATA_NOEXEC is enabled, the read-only sections are mapped read-only/executable while the read/write memories are mapped read/write/non-executable. Potentially 4KB of secure RAM is wasted due to the page alignment between unpaged text/rodata and unpaged read/write data. If CFG_CORE_RWDATA_NOEXEC is disabled, all text/rodata/data/... sections of the core are mapped read/write/executable. Both code and rodata are mapped together without alignment constraint, hence all "ro" sections are defined inside the "rx" related area: __vcore_init_ro_size = 0 or the effective init "ro" size.

As init sections are mapped read-only, the core won't be able to fill the trailing content of the last init page. Hence __init_end and __init_size are page aligned. The core must premap all physical memory as readable to allow moving the hash tables to the allocated buffer during core init.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey)
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemu_virt)
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemu_armv8)
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (b2260)
CFG_CORE_RODATA_NOEXEC=y/n allows mapping the read-only data with a non-executable attribute. This adds alignments that may waste secure memory:
- unpaged text/rodata bound
- init text/rodata bound

To prevent wasting at least one page, the sections text_paged and rodata_paged are swapped in the memory layout.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey)
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemu_virt)
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemu_armv8)
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (b2260)
Rebased, Travis is happy. Ready to be merged.
Make executable memory non writable in the OP-TEE core static mapping
(P-R title changed: drop the RFC state)
-1 Core reuses the MMU xlat table when possible (allows the core to map inside an xlat table already mapped).
-2 New core rx/ro/rw memory: align core (private?) memory on attribute boundaries and apply it to the mapping.
   CFG_CORE_RWDATA_NOEXEC=y/n: separate executable and writable memories. Defaults to 'y'.
   CFG_CORE_RODATA_NOEXEC=y/n: separate executable memory from rodata. Defaults to 'n'.
-3 Pager specific case: try to make rx pages non-writable through the pager aliasing. => postponed
-4 Last: enable write-implies-execute-never in the MMU (when applicable, update plat configs). => postponed

Happy review...