
aligned_alloc requirements update. #681

Merged · 1 commit · Sep 28, 2024
Conversation

devnexen (Collaborator)

In addition, the spec requires size to be a multiple of alignment.

mjp41 (Member) commented Sep 25, 2024

@bsdjhb and @jrtc27, is this change acceptable to you?

Allocate size bytes of uninitialized storage whose alignment is specified by alignment. The size parameter must be an integral multiple of alignment.
https://en.cppreference.com/w/c/memory/aligned_alloc

So this seems reasonable to check.

snmalloc doesn't actually care, and will do the correct thing if that is not the case, e.g.

aligned_alloc(96,64) will actually map to an allocation of 128 bytes with that alignment.
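For reference, a minimal sketch of the check under discussion, written as a hypothetical strict wrapper around the standard aligned_alloc (this is not snmalloc's code, which rounds the size up instead of failing):

#include <cerrno>
#include <cstdlib>

// Hypothetical strict wrapper: fail with EINVAL when size is not an
// integral multiple of alignment, as the C11 wording requires.
void* aligned_alloc_strict(std::size_t alignment, std::size_t size)
{
  if (alignment == 0 || (size % alignment) != 0)
  {
    errno = EINVAL;
    return nullptr;
  }
  return std::aligned_alloc(alignment, size);
}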

I do wonder if there should be a mitigation feature to enforce the libc spec, so that users know when they are violating the spec, but for performance-sensitive use cases we do not need to take the instructions for checking these conditions. E.g. add something to the excerpt below (a sketch of what such a flag might look like follows it):

/**
* Randomize the location of the pagemap within a larger address space
* allocation. The other pages in that allocation may fault if accessed, on
* platforms that can efficiently express such configurations.
*
* This guards against adversarial attempts to access the pagemap.
*
* This is unnecessary on StrictProvenance architectures.
*/
constexpr mitigation::type random_pagemap{1 << 0};
/**
* Ensure that every slab (especially slabs used for larger "small" size
* classes) has a larger minimum number of objects and that a larger
* percentage of objects in a slab must be free to awaken the slab.
*
* This should frustrate use-after-reallocation attacks by delaying reuse.
* When combined with random_preserve, below, it additionally ensures that at
* least some shuffling of free objects is possible, and, hence, that there
* is at least some unpredictability of reuse.
*
* TODO: should this be split? mjp: Would require changing some thresholds.
* The min waking count needs to ensure we have enough objects on a slab,
* hence it is related to the min count on a slab. Currently, without this,
* we have a min count per slab of 16, and a min waking count with this
* enabled of 32. So we would leak memory.
*/
constexpr mitigation::type random_larger_thresholds{1 << 1};
/**
*
* Obfuscate forward-edge pointers in intra-slab free lists.
*
* This helps prevent a UAF write from re-pointing the free list arbitrarily,
* as the de-obfuscation of a corrupted pointer will generate a wild address.
*
* This is not available on StrictProvenance architectures.
*/
constexpr mitigation::type freelist_forward_edge{1 << 2};
/**
* Store obfuscated backward-edge addresses in intra-slab free lists.
*
* Ordinarily, these lists are singly-linked. Storing backward-edges allows
* the allocator to verify the well-formedness of the links and, importantly,
* the acyclicity of the list itself. These backward-edges are also
* obfuscated in an attempt to frustrate an attacker armed with UAF
* attempting to construct a new well-formed list.
*
* Because the backward-edges are not traversed, this is available on
* StrictProvenance architectures, unlike freelist_forward_edge.
*
* This is required to detect double frees as it will break the doubly linked
* nature of the free list.
*/
constexpr mitigation::type freelist_backward_edge{1 << 3};
/**
* When de-purposing a slab (releasing its address space for reuse at a
* different size class or allocation), walk the free list and validate the
* domestication of all nodes along it.
*
* If freelist_forward_edge is also enabled, this will probe the
* domestication status of the de-obfuscated pointers before traversal.
* Each of domestication and traversal may probabilistically catch UAF
* corruption of the free list.
*
* If freelist_backward_edge is also enabled, this will verify the integrity
* of the free list links.
*
* This gives the allocator "one last chance" to catch UAF corruption of a
* slab's free list before the slab is de-purposed.
*
* This is required to comprehensively detect double free.
*/
constexpr mitigation::type freelist_teardown_validate{1 << 4};
/**
* When initializing a slab, shuffle its free list.
*
* This guards against attackers relying on object-adjacency or address-reuse
* properties of the allocation stream.
*/
constexpr mitigation::type random_initial{1 << 5};
/**
* When a slab is operating, randomly assign freed objects to one of two
* intra-slab free lists. When selecting a slab's free list for allocations,
* select the longer of the two.
*
* This guards against attackers relying on object-adjacency or address-reuse
* properties of the allocation stream.
*/
constexpr mitigation::type random_preserve{1 << 6};
/**
* Randomly introduce another slab for a given size-class, rather than use
* the last available to an allocator.
*
* This guards against attackers relying on address-reuse, especially in the
* pathological case of a size-class having only one slab with free entries.
*/
constexpr mitigation::type random_extra_slab{1 << 7};
/**
* Use a LIFO queue, rather than a stack, of slabs with free elements.
*
* This generally increases the time between address reuse.
*/
constexpr mitigation::type reuse_LIFO{1 << 8};
/**
* This performs a variety of inexpensive "sanity" tests throughout the
* allocator:
*
* - Requests to free objects must
* - not be interior pointers
* - be of allocated address space
* - Requests to free objects which also specify the size must specify a size
* that agrees with the current allocation.
*
* This guards against various forms of client misbehavior.
*
* TODO: Should this be split? mjp: It could, but let's not do this until
* we have performance numbers to see what this costs.
*/
constexpr mitigation::type sanity_checks{1 << 9};
/**
* On CHERI, perform a series of well-formedness tests on capabilities given
* when requesting to free an object.
*/
constexpr mitigation::type cheri_checks{1 << 10};
/**
* Erase intra-slab free list metadata before completing an allocation.
*
* This mitigates information disclosure.
*/
constexpr mitigation::type clear_meta{1 << 11};
/**
* Protect metadata blocks by allocating them separately from the chunks used
* for user allocations. This involves leaving gaps in address space.
* This is less efficient, so should only be applied for the checked
* build.
*/
constexpr mitigation::type metadata_protection{1 << 12};
/**
* If this mitigation is enabled, then Pal implementations should provide
* exceptions/segfaults if accesses do not obey the
* - using
* - using_readonly
* - not_using
* model.
*/
static constexpr mitigation::type pal_enforce_access{1 << 13};
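As a rough illustration only (the flag name, the bit position, and the use of a mitigations() predicate below are assumptions for this sketch, not part of snmalloc), such a feature might look like:

/**
 * Hypothetical: enforce the C11 aligned_alloc requirement that size is an
 * integral multiple of alignment, failing the allocation otherwise.
 */
constexpr mitigation::type enforce_libc_spec{1 << 14};

// Sketch of the guarded check in the aligned_alloc path:
//   if (mitigations(enforce_libc_spec) && (size % alignment != 0))
//   {
//     errno = EINVAL;
//     return nullptr;
//   }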

@nwf, do you have thoughts on this?

jrtc27 (Contributor) commented Sep 25, 2024

No objection to enforcing that. The OpenZFS case that motivated removing the overly-strict requirement does aligned_alloc(align, roundup2(sizeof(*zfs), align));, so it conforms to that requirement in the standard.
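For context, roundup2 rounds a value up to the next multiple of a power-of-two alignment; a minimal equivalent of the FreeBSD macro (shown here as a sketch) makes clear why the resulting size always satisfies the standard's wording:

#include <cstddef>

// Round x up to the next multiple of y, where y is a power of two
// (a minimal equivalent of FreeBSD's roundup2 from sys/param.h).
constexpr std::size_t roundup2(std::size_t x, std::size_t y)
{
  return (x + y - 1) & ~(y - 1);
}

// e.g. roundup2(sizeof(*zfs), align) is always an integral multiple of align.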

bsdjhb (Contributor) left a comment

LGTM

mjp41 (Member) commented Sep 25, 2024

So I dug a bit deeper into this. We used to have this assertion and removed it here:

#658

It seems that the spec was loosened by

N2072

This seems to be in the N2310 draft for C17. So the spec does not require an allocator to fail in this case. I would sooner leave the check out, as it will be faster.

@devnexen did you have a strong reason for wanting this?

jrtc27 (Contributor) commented Sep 25, 2024

Looking at a C23 draft (N3220), there is no restriction given for size at all (it inherits the same behaviour for 0 as malloc, as that's hoisted out into the section introduction). So, whilst this restriction is fine for C11, it is wrong for newer standards.

devnexen (Collaborator, Author)
I would not mind enabling this additional check only for C11, then?

jrtc27 (Contributor) commented Sep 25, 2024

You have no idea what C standard the caller is compiled for.

devnexen (Collaborator, Author)

True. OK, I'll drop this check then.

mjp41 merged commit f1df3d4 into microsoft:main on Sep 28, 2024
52 checks passed